title string | paper_decision string | review_1 string | rebuttals_1 string | review_2 string | rebuttals_2 string | review_3 string | rebuttals_3 string | review_4 string | rebuttals_4 string | global_rebuttals string | dataset_source string | conference_year int64 | review_5 string | rebuttals_5 string | review_6 string | rebuttals_6 string | review_7 string | rebuttals_7 string | review_8 string | rebuttals_8 string |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Debiasing Scores and Prompts of 2D Diffusion for View-consistent Text-to-3D Generation | Accept (poster) | Summary: The paper proposes two simple methods to solve the widely known Janus problem in zero-shot text-to-3D generation, which is an essential issue. The proposed methods are intuitive and simple. They are more like optimization tricks than systematic formulations. From the qualitative results, the improvement of the proposed methods is not obvious.
Strengths: 1)The motivation is persuasive because the Janus problem is very important in text-to-3D generation.
2)The paper proposes two novel strategies of debiasing the score-distillation from 2D diffusion models, solving the Janus problem widely existing in zero-shot text-to-3D generation.
3)The first strategy performs dynamic clipping of 2D-to-3D scores to eliminate the biases towards some viewing directions, which solves the problems of additional legs, beaks, and horns.
4)The second strategy uses a pre-trained LLM to identify and remove the words that conflict with the viewpoints.
Weaknesses: 1)Technically, the novelty is a little weak.
2)The paper proposes to debias scores and prompts of 2D text-image diffusion models. But these two debiasing methods are simply integrated for text-to-3D generation without an elegant co-formulation.
3)Even with sophisticated mathematical formulations, the actual debiasing score method is very intuitive and simple.
4)The prompt debiasing method is also very simple and intuitive. What is the difference between the proposed method and using CLIP similarity scores to remove conflicting words?
5)The qualitative comparisons in Figure 6 and 7 do not show much superiority of the proposed method.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: See weakness
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: See weakness
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Qualitative results.** The reviewer mentioned that the qualitative comparisons in Figs. 6 and 7 do not show a clear improvement of the proposed method. However, we would like to note that in the supplementary material, we have provided more examples (Figs. 1 and 3) and images from 360-degree angles (Fig. 2) to better demonstrate the improvements in terms of the Janus problem. If the reviewer regards these comparisons with SJC [Wang et al., 2023] as unappealing, we have also provided a PDF file in the general response. This file includes the results of integrating our method into a variety of concurrent methods that use score distillation for text-to-3D models, such as Magic3D [Lin et al., 2023] (Fig. 3), DreamFusion [Poole et al., 2023] (Fig. 2), and ProlificDreamer [Wang et al., 2023] (Fig. 1), showing clear improvements in the Janus problem. We kindly direct the reviewer to the supplementary material and the general response for more information.
**Novelty and simplicity.** The reviewer has expressed concerns regarding the perceived lack of novelty in our proposed methods. However, to the best of our knowledge, ours is the pioneering effort that directly addresses the Janus problem, and it is also the first to challenge the intrinsic bias present in scores and prompts within text-to-3D frameworks. Technically, we've leveraged a language model trained with an MLM objective and seamlessly integrated it into our framework for detecting contradictions, utilizing its probability output (Section 4.2). Additionally, the dynamic clipping of 2D-to-3D scores is not only deeply rooted in a coarse-to-fine strategy (Section 4.1) but has also demonstrated its effectiveness, as validated in the main paper Fig. 8, supplementary material A.2, and general response PDF Fig. 4. We believe that simplicity in research is not a detriment. When a straightforward method addresses an issue, it stands a better chance of being integrated into diverse frameworks due to its clarity, ease of implementation, and adaptability. However, we acknowledge the reviewer's concern and sincerely appreciate the thorough review.
**Intuitiveness and simple integration.** We acknowledge that our two debiasing methods might seem simple and intuitive. In fact, in Section 4, we presented our methods in such an intuitive manner to clearly convey our motivation. We would like to emphasize the value of this intuitiveness and simplicity in their explanations; they are both explainable and adaptable wherever this intuition is applicable, notably in any text-to-3D framework that utilizes scores from text-to-image diffusion models. Moreover, our methodologies aim to solve separate problems stemming from a fundamental formulation in Section 3. Specifically, as discussed in Section 4.1, the score debiasing method offers a novel interpretation of the scores as outlined in Eq. 2, conceptualizing their magnitude as a (scaled) deviation from the rendered image of a 3D field. This approach directly suppresses deviations that ignore either viewpoint or geometry, addressing the intrinsic bias of score-based models detailed in Section 3. Concurrently, prompt debiasing arises from the need to reduce the contradiction between view prompts and user prompts, as detailed in Section 3. To address this effectively, we employed an MLM-pretrained model to calculate conditional probabilities, identify contradictions, and remove them from view-augmented prompts. Apparently, addressing both problems is necessary and complementary.
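To make the score debiasing mechanism discussed above concrete, here is a minimal sketch of dynamic clipping: the per-element magnitude of the 2D-to-3D score is truncated, with the threshold gradually increased over optimization so that early, geometry-forming steps are constrained the most. The function name, the linear schedule, and the threshold values are illustrative assumptions, not the paper's exact settings.

```python
def debias_score(score, step, total_steps, c_min=0.5, c_max=2.0):
    """Clip each element of a 2D-to-3D score to [-c, c].

    The threshold c grows linearly from c_min to c_max over the
    optimization, so early (coarse, geometry-forming) steps are
    clipped more aggressively than later (fine) steps.
    """
    c = c_min + (c_max - c_min) * (step / total_steps)
    return [max(-c, min(c, s)) for s in score]

# Toy usage: an outlier score component is suppressed early on
# but survives (partially) once the threshold has grown.
score = [0.3, -3.0, 1.2]
early = debias_score(score, step=0, total_steps=10000)      # c = 0.5
late = debias_score(score, step=10000, total_steps=10000)   # c = 2.0
```

The point of the growing threshold is the coarse-to-fine intuition from Section 4.1: deviations that would overwrite already-settled geometry are damped early, while later refinement steps are left mostly untouched.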
**Prompt debiasing with CLIP.** The reviewer questions the difference between the proposed prompt debiasing method and the use of CLIP similarity scores to remove conflicting words. However, CLIP was designed for vision-language contrastive learning, not for calculating conditional probabilities between words. Therefore, it may not be straightforward to use CLIP similarity scores for this purpose. In contrast, our proposed method indeed calculates the desired conditional probability of a word's occurrence based on the formulation of masked language modeling. In fact, in the early stages of our research, we tried using CLIP similarity between word embeddings but found it unsuccessful, leading us to develop a more novel and sound method: the current prompt debiasing.
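The conditional-probability test described above can be illustrated with a toy sketch: a word contradicts a view word when its probability given that view word falls far below its marginal probability, i.e., when their pointwise mutual information is strongly negative. The probability tables below are made-up stand-ins for the outputs of an MLM-pretrained model, not values from the paper.

```python
import math

# Toy probabilities standing in for an MLM's outputs:
# p(word | view_word) and the marginal p(word). Values are illustrative.
p_given_view = {("front", "back"): 0.001, ("front", "view"): 0.2}
p_marginal = {"back": 0.05, "view": 0.2}

def pmi(view_word, word):
    """Pointwise mutual information between a view word and a prompt word."""
    return math.log(p_given_view[(view_word, word)] / p_marginal[word])

# A strongly negative PMI flags "back" as contradicting a "front view"
# prompt, so it would be removed from the view-augmented prompt;
# "view" is unaffected by the conditioning and is kept.
```

This is also why CLIP similarity is a poor substitute here: CLIP provides contrastive embedding similarity, not the conditional probability of one word's occurrence given another, which is what the contradiction test requires.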
[Poole et al., 2023] DreamFusion: Text-to-3D using 2D Diffusion, ICLR 2023.
[Wang et al., 2023] Score Jacobian Chaining: Lifting Pretrained 2D Diffusion Models for 3D Generation, CVPR 2023.
[Lin et al., 2023] Magic3D: High-Resolution Text-to-3D Content Creation, CVPR 2023.
[Wang et al., 2023] ProlificDreamer: High-Fidelity and Diverse Text-to-3D Generation with Variational Score Distillation, arXiv preprint.
---
Rebuttal Comment 1.1:
Comment: Thank you very much for your detailed explanation of my concerns. I keep my initial score.
---
Reply to Comment 1.1.1:
Title: Response
Comment: Thank you very much for your comments! We appreciate them and will further refine our paper. | Summary: This paper proposes two approaches to debias score-distillation frameworks for view-consistent text-to-3D generation. The first approach, called score debiasing, involves cutting off the score estimated by 2D diffusion models and gradually increasing the truncation value throughout the optimization process. The second approach, called prompt debiasing, identifies conflicting words between user prompts and view prompts using a language model, and adjusts the discrepancy between view prompts and the viewing direction of an object. The effectiveness of the proposed method is demonstrated by experiments.
Strengths: 1. This paper is clearly written.
2. The proposed method is technically sound and insightful.
3. Some experiments are conducted to demonstrate the idea.
Weaknesses: 1. The experiment is not sufficient, as discussed in Questions.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. My main concern is about the evaluation. Since the experiments are only conducted with 70 prompts, could you report the success rate of generation, i.e., how many of the generated results for the 70 prompts do not exhibit the Janus problem? This may be more convincing than the metrics used in the paper.
2. I have tried the score debiasing method in the Stable-DreamFusion repo. However, based on my observation, the Janus problem is not alleviated. Is it true that the improvement actually comes from Prompt Debiasing, and the improvement from Score Debiasing is quite marginal? If it is not true, could you provide an experiment solely on Score Debiasing and report the improvement in the success rate of generation? (Success = Without Janus problem, Fail = With Janus problem.)
I am glad to raise my score if the concerns are resolved.
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: 1. Since the results are based on NeRF with latent space rendering, the quality of generated 3D results is lower than the current SOTA. Could this method be applied to other 3D representations, like DMTet, which demonstrates better quality than NeRF as shown in Magic3D? Could this method be applied to a NeRF that renders in RGB space, which demonstrates stronger 3D consistency?
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Success rate.** The reviewer raised a valid suggestion regarding the evaluation, especially concerning the success rate of generation. We concur that quantifying the number of generated results from the 70 prompts that do not exhibit the Janus problem would indeed provide compelling evidence. In light of this, as well as the user study in the supplementary material that evaluated view consistency, faithfulness to user prompts, and overall quality, we have also conducted the evaluation the reviewer suggested, as presented below. The success rate is computed over the 41 of the 70 prompts that feature countable faces, and we marked as successful only those objects that do not exhibit the Janus problem, i.e., those with the accurate number of faces. This experiment particularly corroborates our effectiveness in addressing the Janus problem, and we will include these results in a revised version of the paper.
| | Success-SJC | Success-Ours |
|---------------|-------------|--------------|
| **Success %** | 29.3% | 68.3% |
**Effectiveness of score debiasing.** We acknowledge that the reviewer has experienced limited success in applying the score debiasing technique from the unofficial Stable-DreamFusion repository. It is worth noting that the effectiveness of score debiasing can vary depending on the baseline. As mentioned in our paper, if the 2D diffusion prior generates a severely deformed 3D field, our method might not fully address these issues. Nevertheless, to showcase the improvement achieved by applying score debiasing to the DreamFusion framework [Poole et al., 2023], we conducted an experiment applying only score debiasing (Fig. 2 in the general response). This experiment used the same method as proposed in DreamFusion and utilized DeepFloyd-IF, an RGB-pixel-based diffusion model similar to Imagen [Saharia et al., 2022]. It is important to note that, due to the scale differences between the RGB scores and latent scores, we adjusted our score debiasing threshold for RGB scores to one-fourth of the value proposed in the paper. In the experiment with the high-performing framework, we see that the geometry issue decreases significantly when using only score debiasing.
**Application on other 3D representations and NeRF in RGB space.** The method we have proposed, while tested with NeRF and latent space rendering, is fundamentally a technique applicable for a variety of text-to-3D generation frameworks using score distillation. As such, we believe it could indeed be applied to other 3D representations like DMTet or to a NeRF that renders in RGB space. To address this concern, in general response, we have provided a PDF file including results which demonstrate the applicability of our method to recent frameworks such as Magic3D (using DMTet) [Lin et al., 2023], DreamFusion (using NeRF in RGB space), and ProlificDreamer (using VSD) [Wang et al., 2023]. We kindly direct readers to the general response for more information.
[Saharia et al., 2022] Photorealistic Text-to-Image Diffusion Models with Deep Language Understanding, NeurIPS 2022.
[Poole et al., 2023] DreamFusion: Text-to-3D using 2D Diffusion, ICLR 2023.
[Lin et al., 2023] Magic3D: High-Resolution Text-to-3D Content Creation, CVPR 2023.
[Wang et al., 2023] ProlificDreamer: High-Fidelity and Diverse Text-to-3D Generation with Variational Score Distillation, arXiv preprint.
---
Rebuttal Comment 1.1:
Comment: I have read your rebuttal and it partially solves my concerns. I have raised my score to "weak accept".
I suggest that you add the table of success rate and the experiments in the attached pdf into your paper somewhere in the main text or appendix, if accepted.
---
Reply to Comment 1.1.1:
Title: Response
Comment: Thank you for your feedback and for considering updating the scoring for the paper. I appreciate your suggestion and will revise our paper accordingly. | Summary: This paper addresses the Janus problem appearing in Score-Distillation-based 3D generation methods, where the most canonical view of an object appears in other views. In particular, the authors propose two components for debiasing the score distillation and the prompt used for the generation. First, the authors provide a theoretical discussion of the SDS loss and the terms that contribute to the Janus problem and bad generation quality. Accordingly, the authors propose gradient clipping of the score gradient in order to tackle the discussed problem. Furthermore, the paper discusses another issue of the existing methods, which is the potential contradiction between tokens of the text prompt and the added view guidance tokens. To prevent the contradiction, the authors propose to exploit the point-wise mutual information (PMI) technique to identify and remove the contradictions from the text prompt. The proposed method is evaluated quantitatively and qualitatively and compared to existing baselines. In addition, an ablation study on the different proposed components is provided.
Strengths: -- The paper is well-written.
-- The provided discussion on the bias of score distillation loss and view-augmented prompts is valuable
-- The novelty of the proposed components is sufficient.
-- Based on the provided results, the proposed method seems effective in alleviating the Janus problem.
Weaknesses: -- The efficacy of the proposed method is still highly limited by the bias of diffusion models. (although this is an issue specific to the proposed method).
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: -- I am wondering how much the proposed gradient clipping affects the convergence speed of the optimization. Is there any considerable difference with the baselines regarding the optimization steps?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Limitations and the societal impact of the work has been discussed in the supplementary material.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Bias of diffusion models.** This relates to the limitations of our proposed method including other approaches, specifically regarding the bias of diffusion models. As we mentioned in our paper, if the 2D prior generates a severely deformed 3D field in the baseline, our method might not be able to fully alleviate it. However, it should be noted that our method is intended for debiasing to better distill the 2D diffusion score, not for directly incorporating an additional 3D geometry prior. Nonetheless, our method can be used universally as diffusion models and score-distilling text-to-3D frameworks evolve in the future, acting as an effective plug-and-play method. Ambitiously, we hope that our work provides a stepping stone towards mitigating these biases and combining a 3D prior with those 2D score-based models in future research.
**Convergence speed.** Regarding the question about the impact of gradient clipping on convergence speed, we assure the reviewer that the convergence speed is nearly unchanged by our approach (approximately 20 minutes for SJC and ours). Our experiments showed that the number of optimization steps is comparable to that of the baseline models. With the same number of steps, we observed that the content is maintained while being debiased, with fewer artifacts (figures in the main paper and the supplementary material are always optimized with $10,000$ steps). Additionally, in the general response, we display Fig. 4 in the PDF file to show what the rendered image at each step looks like as the scene undergoes optimization. The 3D scenes for both the baseline and ours evolve similarly in terms of optimization steps, with ours being debiased. Besides, this experiment also underscores the motivation for dynamic clipping of 2D-to-3D scores, as the geometry is determined in the early stages.
**Applicability.** Our method is widely applicable to a variety of concurrent methods that use score distillation for text-to-3D models, including the very recent work on variational score distillation (VSD) in ProlificDreamer [Wang et al., 2023]. In general response, we have provided a PDF file including results which demonstrate the applicability of our method to recent frameworks such as Magic3D [Lin et al., 2023] (Fig. 3), DreamFusion [Poole et al., 2023] (Fig. 2), and ProlificDreamer (Fig. 1). We kindly direct the reviewer to the general response for more information.
[Poole et al., 2023] DreamFusion: Text-to-3D using 2D Diffusion, ICLR 2023.
[Lin et al., 2023] Magic3D: High-Resolution Text-to-3D Content Creation, CVPR 2023.
[Wang et al., 2023] ProlificDreamer: High-Fidelity and Diverse Text-to-3D Generation with Variational Score Distillation, arXiv preprint.
---
Rebuttal Comment 1.1:
Title: Final Comment
Comment: I thank the authors for their response and answering my question. I keep my initial score.
---
Reply to Comment 1.1.1:
Title: Response
Comment: We are grateful for your feedback and will further refine our paper. Thank you! | Summary: This paper proposes debiased score sampling (D-SDS) for improving 2D diffusion-based 3D generation, targeting at the Janus problem. The method is composed of two parts: (i) score debiasing that cuts off scores from 2D diffusion and (ii) prompt debiasing that fixes the discrepancy between view prompts and object orientation. Experiments have been conducted from the baseline method SJC [Wang et al., 2023], where impressive improvements have been shown, especially in solving the Janus problem. A detailed ablation study has shown the value of each debiasing design.
[Wang et al., 2023] Score Jacobian Chaining: Lifting Pretrained 2D Diffusion Models for 3D Generation. In CVPR.
Strengths: - This work is well-motivated, targeting a relevant problem of the Janus problem in 3D generation. The discussions on the problem and technical derivations are solid, insightful, and interesting. Besides, the paper is well-written.
- The proposed method is novel. I like the idea of debiasing scores and prompts, which makes good sense to me. Besides, the proposed method can be easily extended to other 2D diffusion-based methods.
- Experiments are well-designed and presented. Qualitative, quantitative, and user study experiments are shown, and the results demonstrate the effectiveness of solving the Janus problem.
Weaknesses: - Somewhat not essential. The proposed method is simple and effective. However, it seems the method is only applicable to these SJC/DreamFusion 2D diffusion score-based methods. This is not a fundamental solution for the Janus problem but is kind of like a temporary effective trick that alleviates problems.
- Lacking complex examples. The presented results use relatively simple prompts, mainly for simple 3D objects. What about more complex 3D structures like the "Temple of Heaven" shown in SJC? Since one of the main difficulties of text-to-3D generation lies in these more comprehensive geometries, it is important to show more of these cases. Is this another limitation of the proposed method?
- Limited baseline. The proposed method is currently conducted to improve the SJC baseline. What about other methods? Is it possible to be applied to the more recent work ProlificDreamer [Wang et al., 2023]?
- Minor suggestion: It seems that the Janus problem in 3D generation was first pointed out and termed for DreamFusion by Ben Poole before on [social media](https://twitter.com/poolio/status/1578045212236034048?s=20), it would be better to cite DreamFusion when first listing this issue in the paper. But it is okay if it is not cited for this term.
[Wang et al., 2023] ProlificDreamer: High-Fidelity and Diverse Text-to-3D Generation with Variational Score Distillation. arXiv preprint.
[Poole et al., 2023] DreamFusion: Text-to-3D using 2D Diffusion. In ICLR.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: I have no other questions besides those listed before. I am happy to increase the score if my concerns are addressed.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Since the method is based on SJC, it will strongly bound its performance.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Applicability.** Our method is widely applicable to a variety of concurrent methods that use score distillation for text-to-3D models, including the very recent work on variational score distillation (VSD) in ProlificDreamer [Wang et al., 2023]. In general response, we have provided a PDF file including results which demonstrate the applicability of our method to recent frameworks such as Magic3D [Lin et al., 2023] (Fig. 3), DreamFusion [Poole et al., 2023] (Fig. 2), and ProlificDreamer (Fig. 1). We kindly direct the reviewer to the general response for more information.
**Complex examples.** The proposed method is designed to be applicable to complex 3D structures as well, not just simple ones. In addition, it should be clarified that the "Temple of Heaven" example the reviewer suggested might not be the most suitable candidate for evaluating the Janus problem due to its radially symmetrical geometry. In fact, many papers showcase this type of prompt because the results are hardly identified as encountering the problem. Instead, we are eager to showcase our results with complex geometries that are particularly prone to issues like the Janus problem (e.g., "a majestic gryphon with the body of a lion and the wings of an eagle" or "an elegant teacup with delicate floral designs") using the very recent implementation of threestudio [Guo et al., 2023], which enables us to generate more intricate objects with the ProlificDreamer VSD framework. To demonstrate our method's effectiveness in addressing the Janus problem in complex geometries, we present generation results in Fig. 1 in the general response and compare them with baseline models.
**Baselines.** While we agree that applying our method to other models such as ProlificDreamer would further demonstrate its versatility, we selected SJC [Wang et al., 2023] as our primary baseline because it was the state-of-the-art text-to-3D framework available when we submitted our work. Actually, ProlificDreamer was published on arXiv after the NeurIPS paper submission deadline, and they even mentioned the debiasing methods in their appendix as orthogonal methods that can be used in conjunction with their variational score distillation approach.
**Citation.** We appreciate the reviewer's suggestion to credit Ben Poole for introducing the Janus problem in social media. We will revise the manuscript to include the citation for DreamFusion.
[Poole et al., 2023] DreamFusion: Text-to-3D using 2D Diffusion, ICLR 2023.
[Wang et al., 2023] Score Jacobian Chaining: Lifting Pretrained 2D Diffusion Models for 3D Generation, CVPR 2023.
[Lin et al., 2023] Magic3D: High-Resolution Text-to-3D Content Creation, CVPR 2023.
[Wang et al., 2023] ProlificDreamer: High-Fidelity and Diverse Text-to-3D Generation with Variational Score Distillation, arXiv preprint.
[Guo et al., 2023] threestudio: A unified framework for 3D content generation.
---
Rebuttal Comment 1.1:
Title: Post Rebuttal Comment
Comment: I highly appreciate the authors for the detailed response, which I suggest adding to the main paper appendix in the final version. My concerns are largely solved.
Minor suggestion: It would be better to include some quantitative comparisons for methods like ProlificDreamer and Magic3D, if possible.
---
Reply to Comment 1.1.1:
Title: Response
Comment: We're pleased that your concerns have been largely resolved! We appreciate your comments and will revise our paper accordingly, hopefully including your minor suggestion. | Rebuttal 1:
Rebuttal: **General response.** We deeply appreciate the insightful feedback provided on our manuscript and have thoroughly examined all comments. The reviewers recognized the critical nature of the Janus problem in text-to-3D generation, underscoring the importance of our contribution to the field. The reviewers acknowledged the strong motivation behind our research, emphasizing the relevance of the Janus problem within text-to-3D generation (8eVk, nz3q). They also praised the robustness of our discussions, technical insights, and the manuscript's lucidity (8eVk, gQ7C, 9gp3). Our innovative methodology has been spotlighted, especially the debiasing of scores and prompts approach, which has been recognized for its originality and potential adaptability to 2D diffusion-based methods (8eVk). In addition, our comprehensive experiments, covering qualitative, quantitative, and user studies, are a testament to our solution's efficacy in tackling the Janus problem (8eVk, gQ7C).
**Applicability.** Our method is designed to enhance 2D score-based text-to-3D generation methods. While it has been mainly claimed to be applicable to the SJC [Wang et al., 2023] and DreamFusion [Poole et al., 2023] frameworks, the applicability of our approach goes beyond these models. It can be adapted for any text-to-3D generation methods that leverage a score from a text-to-image diffusion model and use view-augmented prompting, which are the dominant approaches in current text-to-3D methods using the scores of 2D diffusion models [Chen et al., 2023; Poole et al., 2023; Wang et al., 2023; Lin et al., 2023; Seo et al., 2023]. These methods, including Magic3D [Lin et al., 2023] and ProlificDreamer [Wang et al., 2023], are to some extent susceptible to the Janus problem. We believe this context renders our work a rather novel solution that tackles the prevailing Janus problem. Thanks to the recent implementation of the text-to-3D frameworks [Guo et al., 2023], we have provided a PDF file including results that demonstrate the applicability of our method to recent frameworks such as Magic3D (Fig. 3), DreamFusion (Fig. 2), and ProlificDreamer (Fig. 1). We hope that concerns about the applicability are resolved when looking at the results in the file.
[Poole et al., 2023] DreamFusion: Text-to-3D using 2D Diffusion, ICLR 2023.
[Wang et al., 2023] Score Jacobian Chaining: Lifting Pretrained 2D Diffusion Models for 3D Generation, CVPR 2023.
[Lin et al., 2023] Magic3D: High-Resolution Text-to-3D Content Creation, CVPR 2023.
[Wang et al., 2023] ProlificDreamer: High-Fidelity and Diverse Text-to-3D Generation with Variational Score Distillation, arXiv preprint.
[Seo et al., 2023] Let 2D Diffusion Model Know 3D-Consistency for Robust Text-to-3D Generation, arXiv preprint.
[Chen et al., 2023] Fantasia3D: Disentangling Geometry and Appearance for High-quality Text-to-3D Content Creation, ICCV 2023.
[Guo et al., 2023] threestudio: A unified framework for 3D content generation.
Pdf: /pdf/00f787a821449a7cbe001e18e4e491567ef99416.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
From ViT Features to Training-free Video Object Segmentation via Streaming-data Mixture Models | Accept (poster) | Summary: This paper proposes a novel method for semi-supervised video object segmentation. The method combines pre-trained deep features from still images with streaming-data clustering techniques to model the object and the background as dynamic ensembles of von Mises-Fisher mixtures. The method does not require any additional training or fine-tuning on videos, and has a low memory footprint by storing only cluster-level information. The method also incorporates spatial coherence, outlier rejection, and convolutional conditional random fields to improve the segmentation quality. The paper demonstrates that the method achieves state-of-the-art results on two challenging benchmarks: DAVIS-2017 and YouTube-VOS 2018.
Strengths: The paper shows that using pre-trained features from still images can eliminate the need for costly and supervised training on videos, while using streaming-data clustering can adapt to temporal changes and reduce memory consumption.
1. The idea of using vMF distribution to model the changes of features in stream data is interesting.
2. The memory-conscious strategy opens possibilities for processing long videos, while the current solution is limited to videos with only about one hundred frames.
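The streaming vMF-mixture modeling praised above can be sketched minimally: for unit-norm features, the vMF log-likelihood of a cluster reduces (up to the concentration parameter) to cosine similarity with the cluster's mean direction, so each incoming feature can be assigned to a cluster and the mean updated online while storing only cluster-level statistics. The function names and the plain incremental-mean update below are illustrative assumptions, not the paper's exact update rule.

```python
import math

def normalize(v):
    """Project a vector onto the unit sphere."""
    n = math.sqrt(sum(x * x for x in v))
    return [x / n for x in v]

def assign_and_update(means, counts, feat):
    """Assign a feature to the nearest cluster by cosine similarity
    (the vMF log-likelihood up to the concentration parameter), then
    update that cluster's mean direction online. Only cluster-level
    statistics (means, counts) are stored -- never past frames."""
    feat = normalize(feat)
    sims = [sum(m_i * f_i for m_i, f_i in zip(m, feat)) for m in means]
    k = sims.index(max(sims))
    counts[k] += 1
    # Incremental mean followed by re-normalization onto the sphere.
    means[k] = normalize([m_i + (f_i - m_i) / counts[k]
                          for m_i, f_i in zip(means[k], feat)])
    return k

# Toy usage: a feature near the first mean direction joins cluster 0.
means = [[1.0, 0.0], [0.0, 1.0]]
counts = [1, 1]
cluster = assign_and_update(means, counts, [0.9, 0.1])
```

This cluster-level bookkeeping is what gives the memory-conscious behavior noted above: memory grows with the number of clusters, not with video length.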
Weaknesses: 1. Almost all of the methods compared in this work target correspondence learning, which can handle not only VOS but also pose/object tracking. Thus it is reasonable that this work, specifically tailored for VOS, achieves SOTA performance.
2. Though no training video is required, the backbone has to be pre-trained on millions of static images, a corpus much larger and with more diverse scenes than YouTube-VOS or Kinetics.
3. The improvement in performance on YouTube-VOS 2018 is not as considerable as that on DAVIS17 val. Moreover, it is even inferior to [1] on YouTube-VOS 2018 (i.e., 71.5 vs. 72.4) despite a stronger backbone (i.e., XCiT-small vs. ResNet-50).
4. How about the inference speed of the proposed method compared to existing work? Will multiple vMF mixture models impose a tremendous burden on inference?
5. Please check the bib carefully, e.g. [7] and [8] are the same.
[1] Unified Mask Embedding and Correspondence Learning for Self-Supervised Video Segmentation. CVPR 2023.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Please refer to weakness.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The limitation has been well discussed in Supplemental Material.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the thoughtful, useful, and positive review.
- Our claim is for SOTA results only on the VOS task, and we make no claims to solve other tasks. However, the methods we cited and compared to are the current SOTA (which we beat…) in VOS, so we had to compare with them even if they can also solve non-VOS tasks.
- The uniqueness of our approach stems from its capability to understand video dynamics without any exposure to video sequences during training. This is a non-trivial task as it necessitates the model to contend with video-specific attributes such as occlusion, camera movement, and changes in object appearances, which are absent in static images. Therefore, the proficiency to learn from static images and apply this knowledge effectively to video sequences enhances the novelty and complexity of our method.
- The evaluation of our method on the YouTube-VOS 2018 dataset yielded a score of 71.5, slightly below [1]'s score of 72.4. It is important to note that [1] utilized the same self-supervised pretraining scheme as [2], followed by additional specialized training on this dataset, whereas the pre-trained features used by our method were trained using only static images. However, this specialized approach by [1] didn't result in superior performance on the DAVIS2017 validation and test sets. Moreover, [1] demonstrates limitations in frame rate and an inability to utilize higher-resolution features, as discernible from their pseudocode. In contrast, our method's performance underscores its more robust adaptability across different datasets and superior handling of high-resolution features, even when solely employing static images during training.
Furthermore, when we examine the performance of our method against INO, which employs a backbone of similar strength to ours (ViT B/8), our method still delivers superior results. This direct comparison demonstrates that our results aren't solely a product of the powerful backbone.
- Performance Comparison in Frames Per Second (FPS):
| Resolution | Feature Dimension | Ours | INO | DULVS |
|------------------|-------------------|-------|------|-------|
| 1/4 (120 x 210) | 384 | 8.25 | 0.13 | 0.14 |
| | 768 | 6.50 | 0.11 | 0.12 |
| | 1152 | 5.25 | 0.10 | 0.11 |
| 1/2 (240 x 420) | 384 | 1.55 | OOM | OOM |
| | 768 | 1.28 | OOM | OOM |
| | 1152 | 1.10 | OOM | OOM |
_OOM: Out of Memory_
We evaluated three models using feature maps at resolutions of 1/4 and 1/2 of the image size (480 x 840) across three feature dimensions (384, 768, 1152). This comparison involved computing the frames per second (FPS) after the first 21 frames, exclusive of the feature extraction step. Our model consistently outperformed the others, exhibiting particular strength in higher resolutions where competing models faced out-of-memory (OOM) issues.
(Tested using Tesla V100-32GB)
- [1] Li, Liulei, et al. "Unified Mask Embedding and Correspondence Learning for Self-Supervised Video Segmentation." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2023.
- [2] Caron, Mathilde, et al. "Emerging properties in self-supervised vision transformers." Proceedings of the IEEE/CVF international conference on computer vision. 2021.
---
Rebuttal Comment 1.1:
Comment: Thanks for your response, all of my concerns have been addressed. | Summary: This paper tackles the semi-supervised video object segmentation problem. It presents a method that relies on clustering features from a pre-trained ViT model. The presented method has a low memory footprint and does not need any additional training. It shows SOTA results on DAVIS-2017 and YouTube-VOS 2018 validation datasets.
Strengths: The presented algorithm is the first to show a low memory footprint and indeed can scale easily.
The paper presents a comprehensive study of the different components of the algorithm and an interesting ablation that shows the necessity of the different components.
The results of the paper are both visually and numerically impressive.
Weaknesses: While the algorithm in the paper yields impressive results, it is hard to follow all the notations and the optimization objectives, mostly due to the nested indexing. While I was able to follow it eventually, I suggest rewriting it (one possibility is to start with a simple two-class foreground/background case and extend it later to N classes).
One element that is not ablated in this paper is the quality of the pre-trained model and the features that are used. To show that this algorithm can improve and produce better results in the future, one should show how the performance of SVOS improves when "better" features are given to it, e.g., does a ViT model that was pre-trained on more data result in better downstream performance? Does a larger model improve the SVOS? Does the algorithm improve with higher-resolution features? All of these questions are important for understanding if this algorithm will survive the "test of time".
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Most of my questions are mentioned above.
Another question I would like to know the answer to concerns timing: how much time does it take, as opposed to other methods for SVOS, and how does it scale with the spatial size of the features?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The limitations of the paper are discussed and strongly relate to the choice of the pre-trained model and its corresponding features, and might be addressed by using a model pre-trained on videos.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the thoughtful, useful, and positive review.
**Adopting Deeper Networks:**
It is important to note that simply adopting deeper networks doesn't necessarily guarantee superior results. As observed in previous self-supervised learning approaches like VFS [1], performance sometimes remains unchanged or even degrades when using deeper backbones like ResNet-50. Deeper networks can also exacerbate complications that already exist during training, such as memory and convergence issues. Thus, our method provides an effective and much-needed solution to a practical problem.
The table below represents an evaluation conducted on the DAVIS-2017 validation set.
| Features | J&F-Mean | J-Mean | J-Recall | F-Mean | F-Recall |
|-------------------------|----------|---------|----------|---------|----------|
| ours+dinov2_small | 0.776 | 0.754 | 0.854 | 0.798 | 0.890 |
| ours+dinov2_base | 0.782 | 0.760 | 0.880 | 0.804 | 0.900 |
| oursx2+dinov2_small | 0.784 | 0.762 | 0.873 | 0.806 | 0.905 |
| oursx2+dinov2_base | 0.803 | 0.780 | 0.885 | 0.825 | 0.916 |
In the experiment, DINOv2's [2] features were concatenated with our original features, a necessity due to DINOv2's large patch size of 14. This fusion provided more effective information utilization. The results showed a marked improvement in J&F-Mean with the "oursx2" configurations (features with higher spatial dimensions). This improvement is attributed to our method's ability to exploit and use higher resolutions, facilitated by XCiT's capability to handle them. A comparison between the DINOv2 versions ("small" vs "base") revealed that the "base" version consistently performed better, underscoring the influence of stronger pre-trained models. Overall, these findings demonstrate that the method's performance can be elevated by using higher resolutions and selecting more advanced pre-trained models, delineating a promising direction for future enhancements.
- [1] Xu, Jiarui, and Xiaolong Wang. "Rethinking self-supervised correspondence learning: A video frame-level similarity perspective." Proceedings of the IEEE/CVF International Conference on Computer Vision. 2021.
- [2] Oquab, Maxime, et al. "Dinov2: Learning robust visual features without supervision." arXiv preprint arXiv:2304.07193 (2023).
---
Rebuttal Comment 1.1:
Comment: Thanks for your response, all of my concerns have been addressed.
---
Rebuttal 2:
Title: Reminder from AC
Comment: Dear Reviewer
Could you please check the rebuttal, if you have further concerns ?
Best,
AC | Summary: This paper proposes a training free framework for semi-supervised VOS. Here, the features of objects from a pretrained ViT are represented as vMF, where the objects across frames are associated to perform propagation from initial masks. The results are better than previous self-supervised methods that trained on unlabeled videos.
Strengths: The idea of associating objects in vMF space for SVOS is interesting, which is a new insight different from the previous contrastive learning based self-supervised methods. The paper is well organized and sufficient comparison visual results are given.
Weaknesses: -The claimed limitations in SVOS are not true, especially regarding the large memory footprint. Actually, some lightweight designs have been proven in VOS, like MobileVOS [1*] and AOT [2*]. Here, [1*] focuses on a lightweight design for real-time VOS, and [2*] tackles the memory-storage issue in STM and the repeated inference for each object id at testing. It's true that the heuristic association method gets rid of the usage of complicated decoders where temporal correspondence and mask generation are performed. However, the large memory comes not only from the network parameters but also from the computational costs. For now, the authors didn't give convincing results to demonstrate this contribution, like comparisons in terms of params, GFLOPs, or FPS.
-For temporal coherence, is it possible to extend the method to model both inter-object contrast and intra-object consistency, so that it can directly predict multiple instances at once?
-Although the method is training-free, many hyperparameters need to be tuned during testing. Figure 5 also demonstrates that changes to the hyperparameters impact the final performance.
[1*] MobileVOS: Real-Time Video Object Segmentation Contrastive Learning meets Knowledge Distillation, CVPR 2023.
[2*] Associating Objects with Transformers for Video Object Segmentation, NeurIPS 2021.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: -More details should be illustrated in Tables 1 and 2, e.g., backbone, resolutions. The proposed method uses a ViT while the others typically use ResNet-18 or ResNet-50. It is necessary to demonstrate that the superior results come from the well-designed algorithm instead of the usage of a more powerful backbone.
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The authors discussed the limitations of their methods.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the thoughtful, useful, and overall-positive review.
- **Large memory footprint**. MobileVOS and AOT are supervised methods that indeed accomplish admirable memory management and computational efficiency. However, these achievements are grounded in the utilization of training labels, which streamline label propagation, subsequently reducing computational and memory demands. In contrast, our approach operates within the unsupervised realm, where label propagation is inherently more complex and memory-intensive.
It's also noteworthy that, to our knowledge, no existing unsupervised methods have effectively navigated the memory-intensive challenges associated with label propagation. Our method bridges this gap, providing an innovative solution in the unsupervised paradigm.
**Performance Comparison in Frames Per Second (FPS):**
| Resolution | Feature Dimension | Ours | INO | DULVS |
|------------------|-------------------|-------|------|-------|
| 1/4 (120 x 210) | 384 | 8.25 | 0.13 | 0.14 |
| | 768 | 6.50 | 0.11 | 0.12 |
| | 1152 | 5.25 | 0.10 | 0.11 |
| 1/2 (240 x 420) | 384 | 1.55 | OOM | OOM |
| | 768 | 1.28 | OOM | OOM |
| | 1152 | 1.10 | OOM | OOM |
_OOM: Out of Memory_
We evaluated three models using feature maps at resolutions of 1/4 and 1/2 of the image size (480 x 840) across three feature dimensions (384, 768, 1152). This comparison involved computing the frames per second (FPS) after the first 21 frames, exclusive of the feature extraction step. Our model consistently outperformed the others, exhibiting particular strength in higher resolutions where competing models faced out-of-memory (OOM) issues.
(Tested using Tesla V100-32GB)
- **Increasing the inter-object variability and increasing the intra-object similarity**. Please note that our method does not make any assumption about the object class. Thus, it is not designed for instance segmentation per se. That said, and as demonstrated in many of our videos (please also visit the url provided with the paper which shows many such videos), if the user provides multiple masks in the first frame, then our method successfully segments multiple objects, including different instances of the same class (e.g. see the video with several dogs).
- **Hyperparameters** exist in most machine-learning models. The number of hyperparameters in our method is not higher than, e.g., in a typical pure deep-learning method. In comparison to traditional training-based models, tuning hyperparameters in our approach is in fact decidedly more straightforward. Of note, we used the same values of hyperparameters for both YouTube-VOS and DAVIS, even though these datasets differ from each other in multiple ways.
- **Backbones and Resolutions:**
| Model | Backbone | Resolution |
|--------|------------|---------------------|
| DINO | ViT B/8 | ⅛ of image resolution |
| LIIR | Resnet-18 | ¼ of image resolution |
| VFS | Resnet-50 | ⅛ of image resolution |
| DULVS | Resnet-18 | ⅛ of image resolution |
| INO | ViT B/8 | ⅛ of image resolution |
In the revised version of our paper, we will ensure these details are included.
- **Adopting Deeper Networks:**
It is important to note that simply adopting deeper networks doesn't necessarily guarantee superior results. As observed in previous self-supervised learning approaches like VFS [1], performance sometimes remains unchanged or even degrades when using deeper backbones like ResNet-50. Deeper networks can also exacerbate complications that already exist during training, such as memory and convergence issues. Thus, our method provides an effective and much-needed solution to a practical problem.
To further dispel any concerns about the effectiveness of our algorithm, we direct attention to our comparison with INO, which employs ViT B/8. Despite using the same powerful backbone, our method outperforms INO, thereby demonstrating that our superior results are indeed a product of our well-designed algorithm, and not merely the result of utilizing a more powerful backbone.
- [1] Xu, Jiarui, and Xiaolong Wang. "Rethinking self-supervised correspondence learning: A video frame-level similarity perspective." Proceedings of the IEEE/CVF International Conference on Computer Vision. 2021.
---
Rebuttal Comment 1.1:
Comment: Thanks for your response. I don't have more questions. | Summary: The method tries to combine classical technique for SVOS task namely vMF and CRFs with advance Deep Learning based ViT representations. By doing so, the proposed method eliminates the training requirements on video data. The authors perform comprehensive experimentation to evaluate their approach.
Strengths: The paper is well written and easy to follow. Authors have done a thorough job in providing comprehensive experimental evaluation for their technique. The major strength of this approach is it does not require video training data.
Weaknesses: - The performance of the model decreases when it encounters unseen examples, as can be observed from Table 2. If the proposed model does not require training, then why is the performance on unseen examples lower?
- I'm not sure I understand what Fig. 3 represents. Is it a comparison of memory utilization by the proposed method and baselines that require 1, 5, or 21 reference frames?
- The claim of a low memory footprint is unclear to me. Is the claim made with respect to other baselines? If so, by how much (delta) is the proposed method better?
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: - As claimed by the authors, their method has a low memory footprint. How does it compare to the baselines with regard to the speed of performing VOS over a video? I want to know if there is a trade-off between the memory footprint and speed.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 3 good
Limitations: As the authors are using pre-trained ViTs to extract the features, all the limitations of ViTs carry forward to this approach.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the thoughtful, useful, and overall-positive review.
- **_"The performance of the model decreases when it encounters unseen examples"_**.
The decrease in performance on unseen examples, as noted in Table 2, does not stem from a need for training, but rather the complexity of these examples. The unseen categories likely comprise more challenging cases. This observation aligns with other unsupervised models not trained on the YouTube-VOS 2018 dataset, where a similar drop in performance occurs. This pattern emphasizes that the challenge lies in the unique difficulty of the unseen examples, rather than a specific limitation of our model.
- **Clarification on Figure 3 and the memory footprint**.
Figure 3 compares memory consumption for the retention of history information between our proposed method and recent works in a similar setting (unsupervised, correspondence based matching), which often need up to 21 reference frames.
The cumulative memory utilization during inference is affected by:
1. **Memory Storage**: Traditional methods store multiple dense feature maps from prior frames, increasing the memory footprint based on the quantity and size of these maps. Our method significantly reduces this by tying memory usage to the number of clusters, which demands far less memory than even a single dense feature map.
2. **Computational Load**: Standard label propagation techniques compute cosine similarity for each feature point in the current frame with all feature points within a certain radius in all reference frames. That approach struggles with scalability, particularly when the feature map resolution increases. The higher the resolution, the larger the radius needed for cosine similarity computation, intensifying the computational challenge. In stark contrast, our method improves on this by computing similarities relative to clusters, not individual feature points, thus enhancing scalability.
- **_"Trade-off between the memory footprint and performance speed:"_**
**Performance Comparison in Frames Per Second (FPS):**
| Resolution | Feature Dimension | Ours | INO | DULVS |
|------------------|-------------------|-------|------|-------|
| 1/4 (120 x 210) | 384 | 8.25 | 0.13 | 0.14 |
| | 768 | 6.50 | 0.11 | 0.12 |
| | 1152 | 5.25 | 0.10 | 0.11 |
| 1/2 (240 x 420) | 384 | 1.55 | OOM | OOM |
| | 768 | 1.28 | OOM | OOM |
| | 1152 | 1.10 | OOM | OOM |
_OOM: Out of Memory_
We evaluated three models using feature maps at resolutions of 1/4 and 1/2 of the image size (480 x 840) across three feature dimensions (384, 768, 1152). This comparison involved computing the frames per second (FPS) after the first 21 frames, exclusive of the feature extraction step. Our model consistently outperformed the others, exhibiting particular strength in higher resolutions where competing models faced out-of-memory (OOM) issues.
(Tested using Tesla V100-32GB)
---
Rebuttal 2:
Title: Reminder from AC
Comment: Dear Reviewer
Could you please check the rebuttal, if you have further concerns ?
Best,
AC | Rebuttal 1:
Rebuttal: We thank the reviewers for their thoughtful reviews and constructive criticism. We are glad that, overall, the paper was positively received.
Here, in the general response, we touch upon several points common to more than one reviewer. Below, we reply to each reviewer separately.
### **Memory Utilization and Computational Load:**
The cumulative memory utilization during inference is affected by:
- **Memory Storage**: Traditional methods store multiple dense feature maps from prior frames, increasing the memory footprint based on the quantity and size of these maps. Our method significantly reduces this by tying memory usage to the number of clusters (as opposed to the much larger number of features), which demands far less memory than even a single dense feature map.
- **Computational Load**: Standard label propagation techniques compute cosine similarity for each feature point in the current frame with all feature points within a certain radius in all reference frames. That approach struggles with scalability, particularly when the feature map resolution increases. The higher the resolution, the larger the radius needed for cosine similarity computation, intensifying the computational challenge. In stark contrast, our method improves on this by computing similarities relative to the clusters present within a certain radius, not individual feature points, thus enhancing scalability (since the number of clusters within a region is much smaller than the number of features in that region).
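The cluster-based matching described above can be illustrated with a toy sketch (hypothetical sizes and random data, not the authors' implementation): instead of comparing each query feature against every stored reference feature, each query feature is compared only against a small set of cluster mean directions.

```python
import numpy as np

rng = np.random.default_rng(1)

def unit(x):
    # L2-normalize rows so dot products are cosine similarities
    return x / np.linalg.norm(x, axis=-1, keepdims=True)

d = 64                       # feature dimension (toy value)
per_frame = 30 * 52          # features per frame (toy value)
n_clusters = 32              # cluster means kept in memory (toy value)

# Dense matching would store 21 full reference feature maps and build a
# (per_frame x 21*per_frame) similarity matrix. Cluster matching stores
# only the cluster means and builds a (per_frame x n_clusters) matrix.
clusters = unit(rng.standard_normal((n_clusters, d)))  # cluster mean directions
query = unit(rng.standard_normal((per_frame, d)))      # current-frame features

cluster_sim = query @ clusters.T          # cosine similarity to each cluster
labels = cluster_sim.argmax(axis=1)       # nearest-cluster assignment
```

The key point is that the stored state and the similarity computation scale with the number of clusters rather than with the number of reference features, which is why higher feature-map resolutions remain tractable.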
### **Performance Comparison in Frames Per Second (FPS):**
| Resolution | Feature Dimension | Ours | INO | DULVS |
|------------------|-------------------|-------|------|-------|
| 1/4 (120 x 210) | 384 | 8.25 | 0.13 | 0.14 |
| | 768 | 6.50 | 0.11 | 0.12 |
| | 1152 | 5.25 | 0.10 | 0.11 |
| 1/2 (240 x 420) | 384 | 1.55 | OOM | OOM |
| | 768 | 1.28 | OOM | OOM |
| | 1152 | 1.10 | OOM | OOM |
_OOM: Out of Memory_
We evaluated three models using feature maps at resolutions of 1/4 and 1/2 of the image size (480 x 840) across three dimensions (384, 768, 1152). This comparison involved computing the frames per second (FPS) after the first 21 frames, exclusive of the feature extraction step. Our model consistently outperformed the others, exhibiting particular strength in higher resolutions where competing models faced out-of-memory (OOM) issues.
(Tested using Tesla V100-32GB)
### **Adopting Deeper Networks:**
It is important to note that simply adopting deeper networks doesn't necessarily guarantee superior results. As observed in previous self-supervised learning approaches like VFS [1], performance sometimes remains unchanged or even degrades when using deeper backbones like ResNet-50. Deeper networks can also exacerbate complications that already exist during training, such as memory and convergence issues. Thus, our method provides an effective and much-needed solution to a practical problem.
The table below represents an evaluation conducted on the DAVIS-2017 validation set.
| Features | J&F-Mean | J-Mean | J-Recall | F-Mean | F-Recall |
|-------------------------|----------|---------|----------|---------|----------|
| ours+dinov2_small | 0.776 | 0.754 | 0.854 | 0.798 | 0.890 |
| ours+dinov2_base | 0.782 | 0.760 | 0.880 | 0.804 | 0.900 |
| oursx2+dinov2_small | 0.784 | 0.762 | 0.873 | 0.806 | 0.905 |
| oursx2+dinov2_base | 0.803 | 0.780 | 0.885 | 0.825 | 0.916 |
In the experiment, DINOv2's [2] features were concatenated with our original features, a necessity due to DINOv2's large patch size of 14. This fusion provided more effective information utilization. The results showed a marked improvement in J&F-Mean with the "oursx2" configurations (features with higher spatial dimensions). This improvement is attributed to our method's ability to exploit and use higher resolutions, facilitated by XCiT's capability to handle them. A comparison between the DINOv2 versions ("small" vs "base") revealed that the "base" version consistently performed better, underscoring the influence of stronger pre-trained models. Overall, these findings demonstrate that the method's performance can be elevated by using higher resolutions and selecting more advanced pre-trained models, delineating a promising direction for future enhancements.
### **References:**
- [1] Xu, Jiarui, and Xiaolong Wang. "Rethinking self-supervised correspondence learning: A video frame-level similarity perspective." Proceedings of the IEEE/CVF International Conference on Computer Vision. 2021.
- [2] Oquab, Maxime, et al. "Dinov2: Learning robust visual features without supervision." arXiv preprint arXiv:2304.07193 (2023). | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: The paper essentially shows, that a combination of classic techniques such as stream-data clustering, using an EM-algorithm and dynamic updates, in combination with strong pre-trained features, allows to achieve state-of-the-art performance on two standard video segmentation datasets (Davis 2017 and YouTube-VOS-2018).
Strengths: Clearly, achieving good performance with a combination of classic techniques, chosen well for the task at hand, is interesting to report and know about. Performance is strong on DAVIS (probably the dataset where most parameters are set) and somewhat less impressive on YouTube-VOS, but still state-of-the-art.
Weaknesses: There is essentially no novelty in the paper, except for proposing a well-chosen combination of classic techniques.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: The paper is well written and the performance is interesting to know about. I suppose the key question to me is essentially how surprising the results are. If I had been asked, prior to reading this paper, whether a sensible combination of stream-based clustering with strong pretrained features can do the job on datasets such as DAVIS and YouTube (these are not particularly difficult after all), I would have clearly said yes. In that sense, the surprise and novel insight of this work is rather marginal to me. Having said this, I would assume that not everyone has the same intuitions, and thus it is imho worthwhile to report the paper in some form as a publication. However, in my personal judgement, the current paper and contribution seem below the bar for a high-profile venue such as NeurIPS.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: ok for me
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the thoughtful and useful review.
We also thank the reviewer for recognizing the strong performance of our method on both the DAVIS and YouTube datasets, as well as appreciating the integration of classic techniques.
We would like to underscore the unique contribution that our work makes, which goes far beyond combining and adapting established techniques. The success of our method lies in the precision with which the techniques are adapted, tailored, and validated to deliver state-of-the-art results on benchmarks that continue to be challenging. The fact that after reading our paper the reviewer feels that the approach is logical and that it makes sense that it works, does not contradict the fact that no one did it before and does not dispute the novelty. Moreover, the details are crucial and many of our judicious choices are far from being obvious (e.g., opting to go with an ensemble of mixtures instead of model selection or nonparametric clustering techniques) and were always made while taking into account not only performance and principled modeling considerations but also efficiency and scalability. In any case, we argue that if it were so obvious that setting new state-of-the-art results on a key computer-vision task can be done this way, people would have already done it before…
Moreover, we are the first to demonstrate a low memory footprint in this domain, an essential quality in real-world applications. Our ability to easily scale to higher resolution features, while relying solely on static images during training, represents a breakthrough in unsupervised video object segmentation. This innovation not only builds on classic techniques but extends them in a way that could redefine best practices in the field.
---
Rebuttal Comment 1.1:
Comment: Thanks for the response. After reading all reviews and the rebuttals I essentially stand by my initial assessment. I can see that others are more positive and thus the paper will likely be accepted and I will not make the case to argue strongly against acceptance.
As said, I find it personally below the bar for NeurIPS and that is still the case. | null | null | null | null | null | null |
MKOR: Momentum-Enabled Kronecker-Factor-Based Optimizer Using Rank-1 Updates | Accept (poster) | Summary: This work introduces a new optimization algorithm for deep neural networks. Building upon the baseline Kronecker-factored curvature (K-FAC) algorithm, this new approach approximates each activation and gradient covariance matrix within K-FAC as a rank-one matrix. This approximation enables the efficient calculation of the inverse Hessian. By leveraging this enhanced efficiency, the algorithm allows for more frequent updates of the Hessian. Experimental results demonstrate great speed improvements compared to the state-of-the-art first and second order optimization algorithms.
Strengths: 1. Efficient second-order optimization algorithm is important for the training of DNNs.
2. Experimental results show great speed-ups compared to the baseline algorithms.
3. The paper is in general well-written, and the main contribution is clear.
Weaknesses: One notable weakness of this research lies in the insufficient justification of the proposed technique, which involves applying rank-1 approximation to the covariance matrices used in K-FAC. While the work addresses the efficiency benefits of this method, it fails to thoroughly discuss the aspect of why employing rank-1 approximation does not impact the final accuracy or convergence of the algorithm. The following questions should be addressed to strengthen the research: What specific scenarios allow for accurate rank-1 approximation, and when might it be ineffective? Does the effectiveness of the approximation depend on the singular value decays of the covariance matrices? Are there any illustrative toy examples that can shed light on these considerations?
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: n/a
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 2 fair
Limitations: n/a
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Below and in the attached PDF figure-file, we show that the covariance matrices in training neural networks can be approximated by rank-1 matrices, both empirically and theoretically. We will include these discussions in our paper to address the reviewers’ concerns.
**Experimental Results:** As shown in Figure 2 in the attached PDF figure-file, our experiments show that the covariance matrices can be approximated by rank-1 matrices with low error, and that higher-rank approximations are unnecessary in practice. Figure 2 (in the attached PDF figure-file) shows the error distribution of the optimal rank-1 approximations of the covariance matrices in ResNet-50 and BERT-Large-Uncased pretraining. Our extensive tests on well-known benchmarks (shown in the paper) indicate that this property holds for all the models, and we have not come across a benchmark that does not have low-rank covariance matrices.
**Decaying Eigenvalues:** Furthermore, Figure 2.e in the attached PDF figure-file shows that the eigenvalues of the factors decay as the model converges, making rank-1 approximations more effective. The reason behind this decay is that the weights are initialized randomly and the neurons act independently at the beginning of training; as the model converges, the neurons become more dependent on each other, and hence the activations and input gradients become linearly dependent. This is also reflected by some large error values in the distributions in Figure 3[a, b, c, d] in the attached PDF figure-file. The factors in MKOR are initialized with identity, so MKOR starts out as a first-order method. As a result, MKOR is more robust against noise in the approximations in the first iterations (the approximation error does not noticeably affect the factors when replacing ${L_{t-1}^{m-1}}^{-1}$ and ${R_{t-1}^{m}}^{-1}$ in equations 5 and 6 with identity). But as the model converges, the factors in MKOR are shaped mostly by the training samples, making MKOR more reliant on less erroneous approximations, and the decaying eigenvalues of the factors help with that.
**Analysis:** Small batch sizes and overparameterization of the networks lead to low-rank covariance matrices in deep neural networks. Consider the covariance matrix $C = XX^T$, where $C \in R^{d \times d}$ is the covariance matrix and $X \in R^{d \times b}$ is a matrix in which each column corresponds to a single sample; $d$ and $b$ are the sample dimension and the per-GPU batch size, respectively. The rank of the covariance matrix is at most $\min(b, d)$. When the per-GPU batch sizes are small, the covariance matrices on each GPU are significantly low-rank, and rank-1 approximation methods can work well in those scenarios. When the batch sizes on each GPU are large, we observe that the covariance matrices still stay low-rank. The underlying reason for this observation is that current neural networks are over-parameterized, and as a result, different features in the covariance matrices of the activations and the output gradients are not linearly independent, resulting in low-rank covariance matrices.
**Extending MKOR to Higher Ranks:** Furthermore, one can extend MKOR to use higher-rank covariance matrices. Assume that $C = \sum_{i=1}^{r}{c_i c_i^T}$, where $r$ is the rank of the covariance matrix $C$. Writing $C_i = C_{i-1} + c_i c_i^T$ (with $C_0 = C^{old}$), we can apply the SMW identity to compute $C_1^{-1}$ from $(C^{old})^{-1}$ with $O(d^2)$ computational complexity, and then continue the same pattern, computing each $C_i^{-1}$ from $C_{i-1}^{-1}$ at $O(d^2)$ per step. The total computational complexity of this process is $O(rd^2)$. We should add this cost to the cost of computing the low-rank approximation of $C$, which requires an SVD. Using SVD kills the main advantage of using low-rank computations, since the computational complexity of applying SVD is the same as that of inverting the factors directly. We could not find any cheaper way to compute low-rank approximations of the covariance matrices, except for the rank-1 approximation used in this paper.
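As a minimal sketch of the chained SMW (Sherman–Morrison) updates described above, the snippet below applies $r$ successive rank-1 inverse updates, each costing $O(d^2)$, and checks them against a direct $O(d^3)$ inversion. The function and variable names are illustrative, not the paper's implementation; the factors are assumed symmetric and initialized with identity, as in MKOR.

```python
import numpy as np

def smw_rank1_update(A_inv, c):
    """Given A^{-1} (A symmetric), return (A + c c^T)^{-1} in O(d^2) via SMW."""
    v = A_inv @ c                                  # O(d^2) matrix-vector product
    return A_inv - np.outer(v, v) / (1.0 + c @ v)  # Sherman-Morrison correction

rng = np.random.default_rng(1)
d, r = 64, 4

A = np.eye(d)        # factor starts from identity
A_inv = np.eye(d)
for _ in range(r):   # a rank-r update costs O(r * d^2) in total
    c = rng.standard_normal(d)
    A = A + np.outer(c, c)
    A_inv = smw_rank1_update(A_inv, c)

# The chained SMW updates match a direct O(d^3) inversion of the final matrix.
assert np.allclose(A_inv, np.linalg.inv(A))
```

The check confirms that the $O(rd^2)$ chain reproduces the exact inverse, so the only remaining cost barrier for ranks $r > 1$ is obtaining the vectors $c_i$ themselves (the SVD cost discussed above).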
---
Rebuttal Comment 1.1:
Comment: Thank you for the response, it would be good to add the discussions above to the paper. I will raise my score. | Summary: MKOR introduces Kronecker rank-1 representation of covariance for second-order optimizers. Authors try to solve the complexity of large language models in second-order optimizers while the Fisher information approximation is computed using rank-1 Kronecker matrix factorization. They also propose a hybrid method to cover the first-order method. Their evaluation of large language models is fair and sound.
Strengths: The main idea behind this research is simple but solves the important problem of the applicability of second-order optimizers for large language models. The paper is well-motivated and experiments are sound and relevant.
Weaknesses: - Authors only argue why rank-1 is proposed from a practical standpoint but there is no theoretical justification to support why rank-1 is enough for Fisher information approximation.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: - Rank-1 approximation might be too restrictive to approximate the Fisher information properly. I wonder whether in some cases an extension to higher ranks is necessary, and whether an extension towards SeKron [1] is natural.
[1] SeKron: A Decomposition Method Supporting Many Factorization Structures arXiv:2210.06299
- Does adding a learning rate scalar in Equation (2) make the method more flexible to converge?
- How well does the second-order MKOR method compare with the first-order with the same training budget in terms of time and memory?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The numerical stability requires singular value decomposition (SVD) and matrix inversion, which are expensive to compute.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Higher Rank Approximations:**
As shown in Figure 2 in the attached PDF figure-file, our experiments show that the covariance matrices can be approximated by rank-1 matrices with low error, and that higher-rank approximations are unnecessary in practice. Figure 2 (in the attached PDF figure-file) shows the error distribution of the optimal rank-1 approximations of the covariance matrices in ResNet-50 and BERT-Large-Uncased pretraining. Our extensive tests on well-known benchmarks (shown in the paper) indicate that this property holds for all the models, and we have not come across a benchmark that does not have low-rank covariance matrices.
*Sekron and other approximation methods:* MKOR reduces the factor inversion costs in second-order methods by combining the SMW identity with low-weight rank-1 approximation methods. If we use any method other than rank-1 approximations, e.g. the Kronecker approximation discussed in SeKron, we need to find proper formulations to incorporate it into the matrix inversion. To our knowledge, no such cheap formulations exist that can be used for inverting matrices.
It might be possible to extend MKOR to use higher-rank covariance matrices; however, it would introduce significant computational overheads to MKOR, since it requires computing the SVD of the factors. Assume that $C = \sum_{i=1}^{r}{c_i c_i^T}$, where $r$ is the rank of the covariance matrix $C$. Writing $C_i = C_{i-1} + c_i c_i^T$ (with $C_0 = C^{old}$), we can compute each $C_i^{-1}$ from $C_{i-1}^{-1}$ using the SMW identity with $O(d^2)$ computational complexity per step, for a total of $O(rd^2)$. We should add this cost to the cost of computing the low-rank approximation of $C$, which requires an SVD. Using SVD kills the main advantage of using low-rank computations, since the computational complexity of applying SVD is the same as that of inverting the factors directly. We could not find a cheaper way to compute low-rank approximations of the covariance matrices, except for the rank-1 approximation used in this paper.
**Learning Rate in MKOR:**
Thanks to the reviewer’s note, we realized there is a typo in equation 2. The learning rate is indeed accounted for in our implementation of equation 2, which should read $W^m := W^m - \alpha {(L_t^m)}^{-1} \nabla_{W^m} \mathcal{L} {(R_t^m)}^{-1}$, where $\alpha$ is the learning rate. We will fix this typo in the final version of the paper.
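As a sanity check of the corrected update rule, here is a minimal self-contained sketch. The layer sizes, learning rate, and factor construction are illustrative assumptions, not the paper's implementation (MKOR maintains the factor inverses through rank-1 SMW updates rather than by explicit inversion as done here):

```python
import numpy as np

rng = np.random.default_rng(2)
n_out, n_in, alpha = 8, 16, 0.01  # illustrative layer sizes and learning rate

W = rng.standard_normal((n_out, n_in))
grad = rng.standard_normal((n_out, n_in))  # stands in for grad of L w.r.t. W^m

# Illustrative SPD left/right factors; identity plus a rank-1 term keeps them
# well-conditioned for this toy example.
a = rng.standard_normal(n_out)
g = rng.standard_normal(n_in)
L_inv = np.linalg.inv(np.eye(n_out) + np.outer(a, a))
R_inv = np.linalg.inv(np.eye(n_in) + np.outer(g, g))

# Corrected Eq. (2): W := W - alpha * L^{-1} grad R^{-1}
W = W - alpha * L_inv @ grad @ R_inv
```

Note that the two-sided preconditioning preserves the weight matrix's shape, since $L^{-1}$ is $n_{out} \times n_{out}$ and $R^{-1}$ is $n_{in} \times n_{in}$.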
**Comparison to First-Order Methods:**
As shown in Figures 2 and 6 in the paper (our appendix), MKOR-H converges 2.57$\times$ faster than LAMB (the state-of-the-art first-order optimizer for pretraining BERT-Large-Uncased) and 1.49$\times$ faster than SGD (the state-of-the-art first-order optimizer for training ResNet-50 on ImageNet). So, given the same time budget, MKOR converges faster than other first-order methods. The memory overheads of MKOR in comparison to other optimizers are also reported in Table 3 in the attached PDF figure-file. It can be observed that all second-order methods have significant memory overheads relative to first-order methods, but MKOR reduces the overhead of KFAC/KAISA by up to 1.5$\times$. We will highlight these points in the final version.
**SVD in Low-Rank Approximations:**
Instead of using SVD methods that suffer from numerical instability and high computational overheads, MKOR uses a low-weight averaging method to compute rank-1 approximations (lines 2 and 3 in Algorithm 1 and lines 106 to 109 in the paper text). Hence, MKOR is SVD-free and its computational and memory overheads are negligible.
---
Rebuttal 2:
Title: Rating revised
Comment: I would like to thank the authors for their convincing rebuttal. I raise my rating accordingly. | Summary: The work introduces a second-order optimizer that uses rank 1 covariance activation and gradient statistics, and efficient inversion algorithms to accelerate an approximated estimate of 2nd order information. Another addition is the rescaling of gradients and norm-based stabilization, which help to stabilize the entire optimization procedure. This procedure yields a highly stable 2nd order optimizer that is both memory (communication) and computationally efficient and leads to improved convergence times compared to baselines.
Strengths: From the work it is clear that the method depends on many independent factors. While some reviewers might see this as a weakness, I see it as a strength. Developing an efficient 2nd-order optimizer is a highly technical and difficult undertaking, and this work provides multiple components that make possible a stable optimizer that is both memory- and computationally efficient.
The evaluation could have been extended to regular language modeling since BERT pretraining has become less academically relevant. Nevertheless, a BERT large pretraining is a rather convincing experimental setup. As such, the experimental robustness is also a strength of the paper.
Weaknesses: I think additional experiments for causal language modeling would have been beneficial since this would be the main advantage for more efficient training (BERT pretraining is fast enough for most researchers, but most researchers are unable to pretrain LLMs due to their computational cost).
While this additional experiment would greatly improve the paper, the experimental methodology in this paper is already relatively robust.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: No questions in particular.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 3 good
Limitations: Limitations are not discussed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: MKOR is currently the most efficient second-order optimizer that can be used on LLMs and CNNs. As the reviewer has pointed out, similar to most state-of-the-art optimizers, MKOR has hyperparameters that users have to tune to use it properly. We have tried to automate as many of these tunings as possible, e.g. the use of the norm-based stabilizer for automating stabilization, the stabilization frequency for switching between first- and second-order methods in MKOR-H, and the knee-point learning rate scheduler. We are optimistic about MKOR’s performance on other LLMs; however, we were not able to try other models due to their computational and resource costs.
Strengths: 1. An implicit inverse computation method with the SMW identity in computing the inverse preconditioner factors and a formula to control the balance of exploitation between the first-order and second-order information.
2. A hybrid method of MKOR with the first-order method, MKOR-H, which allows switchable from MKOR to the first-order optimizer in the later of training.
Weaknesses: 1. An important baseline, Eva [A], which is extremely similar to MKOR, is missing.
2. Though MKOR-H seems to achieve some improvement, when to switch MKOR to the first-order optimizer is unclear.
3. The comparison between MKOR and K-FAC seems to be not that fair (e.g., Figure 4(b)).
[A] Eva: Practical Second-order Optimization with Kronecker-vectorized Approximation, ICLR 2023.
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: 1. The key idea of using rank-1 updates to replace the original Kronecker factors in K-FAC has been proposed in Eva [A]. In [A], the authors provide a better derivation of the preconditioner without storing the factor matrices so that they use Kronecker vectors (KVs) to calculate the inverse of KVs using the SMW formula, which is much better than MKOR as MKOR needs to store the matrix form of factors (i.e., $L_t^m$ and $R_t^m$). The similarity between MKOR and Eva and the advantage of Eva over MKOR make the originality of this work weak. It is suggested to clarify the differences between MKOR and Eva, and highlight some advantages of MKOR over Eva.
2. The main novel idea in MKOR is the balance between second-order and first-order information in updating $L_t^m$ and $R_t^m$ using Eq. (7) and (8) by controlling $\zeta$. However, how to choose $\zeta$ (or dynamically changing $\zeta$ to switch MKOR to SGD just like MKOR-H) is very important. The paper doesn’t provide such an analysis of when and how to use $\zeta$ though some results show that it can help improve convergence.
3. Similarly to the previous problem, MKOR-H is a combination of a first-order optimizer and MKOR according to the loss during training. However, how to measure whether the loss has changed is unclear (e.g., the results in Figure 2 do not contain such information either), and when to switch to the first-order optimizer is unclear.
4. What are the update frequencies of the second-order information for MKOR and K-FAC in Figure 2 and Table 3? And how do you choose such frequencies for comparison?
5. In the original paper of KAISA, the update frequency can be set to be very high (e.g., 100-500) for achieving good accuracy on ResNet models with CIFAR10 and ImageNet datasets, is it possible that MKOR could be better than KAISA in these kinds of scenarios?
[A] Eva: Practical Second-order Optimization with Kronecker-vectorized Approximation, ICLR 2023.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 2 fair
Limitations: See Questions.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Comparison to Eva:**
What the reviewer has pointed out as the strength of Eva (using KVs so as not to store the factor matrices) is at the same time a weakness of Eva, making Eva less accurate and less efficient than MKOR. Using KVs instead of factor matrices prevents the proper use of momentum and imposes the use of damping factors in Eva, both of which damage the accuracy of the FIM approximations.
Similar to KFAC, Eva’s formulation involves a damping factor (which helps with numerical stability, similar to the left-hand side of equation 13 in the appendix) that introduces additional error into the FIM approximation; in MKOR, however, we introduce a new formulation that removes the damping factor by storing the inverses of the previous factors and utilizing the SMW identity (equations 5 and 6 in our paper).
In addition, Eva does not support proper momentum-based training because it does not store the left and right factors of the FIM (instead, it averages the KVs over time, which lacks theoretical backing). MKOR stores the left and right factors to enable momentum in training, which has been shown to significantly boost the training process (equations 3 and 4 in our paper).
MKOR has two additional contributions that lead to a more stable and efficient implementation of rank-1 updates. (1) It uses a *Norm-Based Stabilizer* and *Gradient Rescaling Mechanism* to detect and avoid exploding gradients. (2) with MKOR-H we combine the high convergence rate of second-order methods in the initial phase of training with the low overhead of first-order methods in the late stages of the training.
Eva comparison Results (please also see the attached PDF figure-file):
* BERT-Large-Uncased: Compared to Eva, MKOR-H is faster in pretraining BERT-Large-Uncased by a factor of 1.69. Its accuracy is also better (an average accuracy of 81.1\% on GLUE and 90.64\% on SQuAD v.1 for MKOR with only 600 iterations vs 80.9\% on GLUE and 90.55\% on SQuAD v.1 for EVA with 1000 iterations), please see Figure 1 and Table 1 and Table 2 in the attached PDF figure-file. Figure 1 in the PDF file shows the convergence results of all the optimizers used in the paper in addition to Eva.
* ResNet-50: We could not reproduce the ResNet-50 results of Eva on ImageNet because the hyperparameters are not reported. We tried to tune Eva on multiple settings and none converged to desired accuracy.
* It is important to note that Eva does not compare results with the most efficient implementation of KFAC. The KFAC version used in Eva is from [1], from 2015. A number of follow-up works (please see lines 41-42 in the paper) have provided faster implementations of KFAC. We use KAISA, the state-of-the-art implementation of KFAC. Also, from discussions with the KAISA authors and our own experiments, the optimal inversion frequency for KFAC is 200; Eva uses an inversion frequency of 50 for KFAC, which makes KFAC slower.
**Dynamic Controlling of $\zeta$:**
Changing the stabilization frequency with a constant $\zeta$ (as done in MKOR) is equivalent to dynamically controlling the value of $\zeta$. In MKOR, instead of changing $\zeta$ dynamically at each iteration, we keep it constant but change the frequency at which we stabilize the factors (equations 7 and 8 in the paper). The frequency is dynamically controlled using a norm-based criterion (lines 5 and 6 in Algorithm 1). Also, if the inversion frequency is higher than a threshold, which is similar to having a very small $\zeta$, we switch to SGD to save on computations.
**Difference of MKOR and MKOR-H in Loss and Switching Criteria:**
MKOR combines first- and second-order optimization methods using norm-based stabilization. As per our response to the previous question, the frequency of stabilization dictates how close MKOR is to a first-order method: the more frequently stabilization takes place, the closer MKOR gets to first-order methods. Our switching method looks at the average stabilization frequency over the last few iterations using an exponential moving average; if the frequency is higher than a threshold and the loss change rate is less than a small ratio of the overall loss reduction, the user is notified to switch to LAMB (the first-order optimizer) if desired. In the BERT-Large-Uncased pretraining, we switch to LAMB at iteration 300.
**Inversion Frequency**
As reported in section 7.6 in the appendix of the paper, in all our experiments, the frequency (factor reuse time) of inverting the factors in MKOR and MKOR-H is set to every 10 iterations unless stated otherwise (i.e. in Figure 4 in the paper). For KFAC we use the default settings they provide in their papers.
**Infrequent Factor Inversion in KAISA:**
For our experiments, including the ResNet-50 experiment, we used the default settings in the KAISA paper, which has a factor reuse time of 200. This number can be large in KAISA, since the factor inversions in KAISA are computed from scratch every time. In MKOR, which is an approximation-based method, we cannot use very stale factors, since each of our updates only modifies the factors slightly and stale factors won’t be useful anymore.
**Fairness of Figure 4.b Comparisons:**
Figure 4.b motivates the use of more frequent updates of second-order information; it does not intend to compare different optimizers. There might be some confusion when interpreting Figure 4, particularly due to the absence of MKOR 1000 and KAISA 10. KAISA 10 is not in the figure because it was very slow in comparison to the others (we will add a sentence in the figure caption to note this). MKOR 1000 updates the factors so infrequently, with very insignificant rank-1 updates, that it performs similarly to SGD and would not have any benefit compared to other second-order methods.
[1] Optimizing neural networks with Kronecker-factored approximate curvature, Martens et al., 2015
[2] KAISA: An Adaptive Second-Order Optimizer Framework for Deep Neural Networks, Pauloski et al., 2021
---
Rebuttal Comment 1.1:
Title: Rebuttal read
Comment: Acknowledgments to the authors for providing a detailed response and additional experiments. Most of my concerns are addressed so I raise my rating. Hope that these discussions would be included in the final version to make the paper clear. | Rebuttal 1:
Rebuttal: We thank all reviewers for their very informative feedback. We have provided a separate answer to each reviewer and have also attached a PDF file with figures and tables that add to our rebuttal (which we refer to as the attached PDF figure-file in our per-reviewer responses).
Pdf: /pdf/6654d82a21c3a2dfafee9d0594d570c348c84b2b.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Interaction Measures, Partition Lattices and Kernel Tests for High-Order Interactions | Accept (poster) | Summary: The authors introduce a hierarchy of $d$-order interaction measures by introducing a family of tests based on factorizations of the joint probability distribution that generalize to any order $d$ and define non-parametric tests to establish the statistical significance of these d-order interactions. They link their approach to lattice theory which they use to reduce the computational complexity of our d-order interaction tests. They validate their method on a synthetic data set and through an application to a neuroimaging dataset.
Strengths: The paper is very well written, covers an exciting topic and seems to introduce very original results that are of broad significance. The tests proposed and their relationship to lattice theory are fascinating and of some practical utility, in my opinion (although I must admit this work is outside my expertise).
Weaknesses: As pointed out by the authors, for the time complexity of computing the Streitberg interaction estimator, the number of terms remains combinatorial. Yet clearly, as the authors show, it is possible to apply it to the HCP data. It would be useful to know the current practical limitations in terms of data size for the method proposed here.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Can the authors comment more on the current practical limitations?
Can the authors better flesh out the consequences and interpretations of their results for the HCP data, as a means of demonstrating the utility and motivation for their approach?
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The authors could explicitly add a "Limitations" subsection.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their comments and useful suggestions. We appreciate that one must consider both the overall complexity and the practical implementation of our tests. Therefore, we have provided a more in-depth analysis of both.
First, in Table 2 in the main response we provide the overall time taken to compute the full experiments at each order for each brain network. Whilst the overall time to compute the tests increases with interaction order, it was still entirely feasible to run this on standard computer hardware, particularly given that 3-, 4-, and 5-way interactions will be sufficient for most datasets.
Second, datasets often come in different sizes, and so we further considered how sample size would affect our ability to identify high-order interactions. Figure 1 in the attached PDF shows how the rejection rate of our tests decreases as we reduce the sample size. The Lancaster and Streitberg tests are able to accurately detect higher-order interactions with as few as 50-100 samples (depending on the interaction proportion/strength), whilst dHSIC requires substantially more samples.
Finally, we report the big-O notation complexity in Table 1 in the main response for each interaction order. We show that our theoretical improvements, which follow from our mathematical links with lattice theory, reduce the time complexity significantly for $d>3$ interactions in both the Lancaster and Streitberg interaction estimators.
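The Lancaster and Streitberg statistics themselves are too involved for a short snippet, but the permutation-testing scheme used to assess significance can be sketched generically. In the sketch below, the absolute-correlation statistic is a deliberately simple placeholder for the paper's kernel estimators, and `permutation_pvalue`, the sample size, and the dependence strength are all illustrative assumptions:

```python
import numpy as np

def permutation_pvalue(stat_fn, x, y, n_perm=200, seed=0):
    """Shuffle y to simulate the null of independence and estimate a p-value."""
    rng = np.random.default_rng(seed)
    observed = stat_fn(x, y)
    null_stats = [stat_fn(x, rng.permutation(y)) for _ in range(n_perm)]
    # Add-one correction keeps the estimated p-value strictly positive.
    return (1 + sum(s >= observed for s in null_stats)) / (1 + n_perm)

rng = np.random.default_rng(3)
n = 200                                      # sample size, as varied in Figure 1
x = rng.standard_normal(n)
y = 0.8 * x + 0.2 * rng.standard_normal(n)   # a strongly dependent pair
stat = lambda a, b: abs(np.corrcoef(a, b)[0, 1])

p = permutation_pvalue(stat, x, y)
assert p < 0.05  # dependence is detected at this sample size
```

Replacing the placeholder statistic with a $d$-variable interaction estimator, and permuting each variable independently, gives the general shape of the tests whose rejection rates are reported above.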
Regarding the HCP data set, we believe there are some interesting interpretations and important implications. If we consider the SOM or VIS, our results show a larger percentage of 2-way interactions compared to regions such as the FPN or DAN, suggesting that there is increased redundancy in the system relative to these other regions. The SOM and VIS are structurally coupled, modular sensorimotor processing regions, that benefit from information redundancy to increase robustness. These findings are in line with recent work reporting higher redundancy in sensorimotor regions [1], although further investigation is needed to draw firm conclusions about brain function from these analyses.
For further investigation, we hierarchically constructed hypergraphs from the identified interactions at different orders and then analysed their structure. By computing the degree assortativity of each hypergraph, we found that most regions display a negative degree assortativity that becomes less negative with increasing interaction order. Additionally, we noticed that the degree assortativities when considering only pairwise connections (d=2) are quite similar across regions. However, when including higher-order interactions in the hypergraphs, we find that the degree assortativities of the regions diverge from each other. This has important implications for how the RSNs are defined if one considers high-order interactions.
[1] Luppi, A.I., Mediano, P.A.M., Rosas, F.E. et al. A synergistic core for human brain evolution and cognition. Nat Neurosci 25, 771–782 (2022). | Summary: The paper describes a kernel test for estimating d-th order interaction, and computing its significance using permutation test. The authors propose two tests based on Lancaster interaction and Sreitberg interaction. The authors demonstrate the effectiveness of these measures on simulated data with higher order interaction and on real data from fMRI measurements.
Strengths: The paper addresses an interesting and relevant problem since estimating higher order interaction provides invaluable insight into the structure and relations among multivariate systems. The paper is very well written; the authors motivates and describes the problem well, and discusses the proposed estimator in detail.
Weaknesses: The paper only discusses the performance of the proposed measure against a kernel based test of independence rather than existing tests on Lancaster and Streitberg interaction.
In the experimental section it is shown that the neuroimaging dataset exhibits more higher-order interaction within a region than between regions. It would be great to get some more insight on, e.g., the implication of higher 4-way interaction than 3-way in SOM and higher 3-way interaction than 4-way interaction in DMN. Similarly, the lack of higher-order interactions in LLM despite 2-way interactions being prevalent.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: - equation 4, do we need the inequality under the sum given the zeta function?
- equation 5, what is phi_iz
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 4 excellent
Limitations: The authors address the limitation of the work in section 7, e.g., around computational complexity.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their comments and positive response to our paper.
Regarding the suggestion to compare against non-kernel implementations of Lancaster and Streitberg tests, we are not aware of any such alternative approaches. However, we would be open to exploring alternatives for comparing and validating our implemented tests if the reviewer has suggestions.
Although our focus here was on the theory and the neuroimaging data was presented very concisely, we were also intrigued by the analysis of this data set and its interpretations. If we consider the SOM or VIS, our results show a larger percentage of 2-way interactions compared to regions such as the FPN or DAN, suggesting that there is increased redundancy in the system relative to these other regions. The SOM and VIS are structurally coupled, modular sensorimotor processing regions, which could benefit from information redundancy to increase robustness. These findings are in line with recent work reporting higher redundancy in sensorimotor regions [1], although further investigation is needed to draw firm conclusions about brain function from these analyses.
The potential for future investigation of the high-order features in neural data is large but this fell beyond the limits of the current paper. As an additional investigation, we hierarchically constructed hypergraphs from the identified interactions at increasing order $d$, by defining the high-order interactions as hyperedges and individual regions as nodes, incrementally including the interactions at a higher order. We then analysed some structural properties of these hypergraphs, and report the degree assortativity of each hypergraph (Fig 3, attached). We found that most regions display a negative degree assortativity that became less negative with increasing order interactions.
Additionally, we noticed that the degree assortativities when only considering pairwise connections ($d=2$) are quite similar across regions. Standard deviations of degree assortativities across RSNs are shown in the table below.
| Degree assortativity | Standard deviation |
|----------------------|--------------------|
| 2-way | 0.044 |
| 3+2-way | 0.161 |
| 4+3+2-way | 0.104 |
| 5+4+3+2-way | 0.059 |
However, when including high-order interactions in the hypergraphs, we find that the degree assortativities of each region diverge from each other. We interpret this as tentative evidence that the structure of macroscopic brain organisation may differ substantially when taking into account interactions beyond pairs, highlighting the importance of methods like ours to reveal new insights from brain data when taking into account high-order interactions that could be of importance for functional processing.
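The hypergraph analysis described above can be made concrete with a small sketch. The construction below (hyperedges as detected interactions, degree assortativity as the Pearson correlation of hyperdegrees over co-occurring node pairs, counting both orientations) is one illustrative convention, not necessarily the exact computation behind Fig 3:

```python
from itertools import combinations
import math

def degree_assortativity(hyperedges):
    """Pearson correlation of hyperdegrees (number of incident hyperedges)
    at the two ends of every node pair co-occurring in a hyperedge.
    Both orientations of each pair are counted, as is conventional.
    Note: degenerate if all hyperdegrees are equal (zero variance)."""
    degree = {}
    for e in hyperedges:
        for v in e:
            degree[v] = degree.get(v, 0) + 1
    xs, ys = [], []
    for e in hyperedges:
        for u, v in combinations(sorted(e), 2):
            xs += [degree[u], degree[v]]
            ys += [degree[v], degree[u]]
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hub-and-spoke graph: one high-degree hub attached to degree-one leaves,
# hence maximally disassortative (assortativity -1).
star = [(0, i) for i in range(1, 5)]
```

Adding hyperedges incrementally by order ($3+2$-way, $4+3+2$-way, ...) and recomputing this quantity reproduces the kind of order-resolved assortativity profile reported in the table above.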
We also thank the reviewer for spotting the typographical error in Equation 4. You are right that the inequality is already encoded in the zeta function. We have corrected this in the revised manuscript.
Regarding your question on Equation 5, we are assuming that by $phi\_iz$ you are referring to $\phi^i$? $\phi^i$ is the feature map of variable $X^i$ and $k^i(x, x') = \langle \phi^i (x), \phi^i (x') \rangle$. We apologise for not making this clear in the text and have stated this more clearly in Definition 1 and Definition 2 in the revised manuscript.
[1] Luppi, A.I., Mediano, P.A.M., Rosas, F.E. et al. A synergistic core for human brain evolution and cognition. Nat Neurosci 25, 771–782 (2022).
---
Rebuttal Comment 1.1:
Title: Thank you for your comments
Comment: I would like to thank the authors for the detailed comments and discussion on the potential meaning of the higher order interactions.
Regarding the standard test, I was wondering if one can take the squared distance between the RHS and LHS of Eqs. (2) and (3) over the support of the distributions, e.g., using Parzen estimate or assuming Gaussian distributions, as a test statistic.
Was Fig. 3 attached in the response?
---
Reply to Comment 1.1.1:
Title: Reply to second comment
Comment: We would like to thank the reviewer for the ongoing discussion, and for the additional clarification to their original question regarding standard tests.
Re: standard tests --- As suggested by the reviewer, comparisons to standard approaches to model distributions and ensuing statistical tests would be worthy of further investigation. However, we can already provide a first answer and discussion based on results from the literature.
As already noted, the use of kernel-based methods is predicated on their generality (i.e., fewer assumptions due to their being non-parametric), as well as higher test power and reduced 'curse of dimensionality' for high-dimensional data.
Given the usual lack of knowledge of the underlying distribution in numerous real-world data sets,
imposing assumptions about Gaussianity (or other parametric distributions) can severely limit the applicability of tests and lead to inaccurate conclusions. Indeed, several of our examples are non-Gaussian.
Going beyond such assumptions, a first strategy to circumvent some of these limitations is by fitting a mixture of Gaussians, often implemented through the Expectation-Maximization (EM) algorithm. However, this poses heavy computational challenges of its own as it is a non-convex problem that might converge slowly to local optima, and with intrinsic problems in establishing the number of Gaussians in the mixture. In particular, Ref [1] showed that the approximation with mixtures of Gaussians is slow and inaccurate for high-dimensional problems.
Therefore, alternative non-parametric approaches to model arbitrary distributions with better numerical properties have been developed. One of the most widely used is the Parzen estimate suggested by the reviewer, which is also known as kernel density estimation (KDE). However, detailed numerical comparisons have already shown that KDE is not suitable for high-dimensional data and it typically does not consider the dependence structure among the variables [1].
Specifically, Ref [1] showed that a two-sample test based on KDE using the L2 distance has less power compared with the Maximum Mean Discrepancy formulated by kernel mean embedding (as we do in our paper). Indeed, kernel mean embeddings do not suffer from the curse of dimensionality as the rate of convergence of the empirical kernel mean embedding to the true kernel mean embedding of the underlying distribution is independent of the dimensionality. One can (with high probability) obtain an approximation within
$\mathcal{O}(n^{-1/2})$ of the true kernel embedding based on a finite sample of size $n$.
In summary, kernel-based methods exhibit substantially improved computational and theoretical properties relative to KDE and other non-parametric methods, as well as avoiding any assumptions (e.g., Gaussianity) about the underlying distributions.
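As an illustration of the kernel-mean-embedding comparison discussed above, a minimal unbiased MMD$^2$ estimator with a Gaussian kernel might look as follows (the 1-D toy data and bandwidth are assumptions for illustration, not the paper's setup):

```python
import numpy as np

def rbf_gram(a, b, sigma=1.0):
    # Gaussian kernel Gram matrix for 1-D samples a, b
    d2 = (a[:, None] - b[None, :]) ** 2
    return np.exp(-d2 / (2 * sigma ** 2))

def mmd2_unbiased(x, y, sigma=1.0):
    """Unbiased estimator of squared MMD between samples x and y:
    off-diagonal means of the within-sample Gram matrices minus
    twice the mean of the cross Gram matrix."""
    n, m = len(x), len(y)
    kxx, kyy, kxy = rbf_gram(x, x, sigma), rbf_gram(y, y, sigma), rbf_gram(x, y, sigma)
    term_x = (kxx.sum() - np.trace(kxx)) / (n * (n - 1))
    term_y = (kyy.sum() - np.trace(kyy)) / (m * (m - 1))
    return term_x + term_y - 2 * kxy.mean()

rng = np.random.default_rng(0)
x = rng.normal(0.0, 1.0, 200)
y_same = rng.normal(0.0, 1.0, 200)  # same distribution: MMD^2 near 0
y_far = rng.normal(5.0, 1.0, 200)   # shifted distribution: MMD^2 large
```

The $\mathcal{O}(n^{-1/2})$ convergence of the empirical embedding means no bandwidth-vs-dimension trade-off of the KDE kind enters here.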
We thank the reviewer for pointing out this comparison and, if accepted, we will include this discussion in our revised manuscript.
[1] Song, Le, Kenji Fukumizu, and Arthur Gretton. "Kernel embeddings of conditional distributions: A unified kernel framework for nonparametric inference in graphical models." IEEE Signal Processing Magazine 30.4 (2013): 98-111.
Re: Figure 3 --- Apologies for not being clear: Figure 3 referred to the attached pdf with additional figures that have been produced to support this review process. The PDF is located in the general response at the top. You can find the link right below the tables. | Summary: The authors conduct a very systematic study of tests that measure couplings between variables where these couplings include higher-order interactions. Authors uncover connections with lattice theory that then leverage them to formulate these tests more efficiently and in a more interpretable way. The authors demonstrate empirically that one of these tests, the Lancaster test, is better than dHSIC to test for joint and marginal independence and that the Streitberg test is overall superior to the other tests when it comes to detecting all the factorizations of the joint distribution. Finally, the authors validate their results numerically with synthetic data and brain data.
Strengths: - **originality**: The authors provide novel theoretical connections with lattice theory, which help derive the interaction measures and their statistical tests. These contributions are novel to the best of my knowledge and are noteworthy.
- **quality**: The theoretical derivations seem sound and there are no obvious errors I could identify. The theoretical results are validated with a sufficient amount of simulated and real-world data.
- **clarity**: the paper is very clearly written.
Weaknesses: - **significance**: the significance of the approach is hindered by the fact that it is, as claimed by the authors, "computationally expensive". Additionally, the authors themselves also point out that the theoretical results rely on the assumption of iid data, which limits the applicability of the method to real data.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: No questions for the authors.
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The authors have addressed adequately all the limitations and negative impact of their work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their review and comments. The theoretical results in our paper leveraging lattice theory enable us to define and implement kernel-based $d$-order Lancaster and Streitberg interaction tests, whose computational complexity was previously prohibitive, thus opening up their practical use for real-world applications.
The original time complexity for computing the Streitberg interaction estimator is $\mathcal{O}(B_d^2 n^{2d})$, where $B_d$ is the Bell number, which represents the number of partitions of a set of cardinality $d$; consequently, $B_d^2$ represents the number of terms in the Streitberg interaction estimator. For fixed $n$, our lattice formulation allows us to reduce the number of terms in the mixed cumulant operator from $B_d$ to $F_d$, where $F_d$ is the number of partitions without singletons, equivalent to centring. In Table 3 in the main response, we illustrate the reduction in the number of terms before and after eliminating the partitions with singletons. After this reduction, the complexity becomes $\mathcal{O}(F_d^2 n^{2d})$. For fixed $d$, we can further optimise the time complexity by optimal contraction ordering, using the fact that two index sets are disjoint, such that $\mathcal{O}(n^{2d})$ becomes $\mathcal{O}(n^{\min(|\pi_s|, |\pi_s'|) + 1})$. Examples of this reduction for fixed $d$ can be found in Table 1 in the main response.
Similarly, the time complexity for computing the $d$-order Lancaster interaction estimator na\"ively is $\mathcal{O}(2^{d+1} n^{2d})$. By centring, i.e. eliminating the partitions with singletons, the only element left is $\hat{1}$. Therefore the Lancaster interaction estimator can be computed in $\mathcal{O}(dn^2)$.
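For $d=3$, the centring described above leaves a single Hadamard-product term, consistent with the $\mathcal{O}(dn^2)$ claim. The sketch below follows the known three-variable kernel Lancaster V-statistic (sum of the entrywise product of doubly centred Gram matrices, as in Sejdinovic et al. 2013); the XOR-style toy data is an assumption for illustration:

```python
import numpy as np

def rbf(a, sigma=1.0):
    d2 = (a[:, None] - a[None, :]) ** 2
    return np.exp(-d2 / (2 * sigma ** 2))

def centre(k):
    # double centring: HKH with H = I - 11^T / n
    n = len(k)
    h = np.eye(n) - np.ones((n, n)) / n
    return h @ k @ h

def lancaster3_stat(x, y, z, sigma=1.0):
    """V-statistic for the 3-variable kernel Lancaster interaction.
    After centring each Gram matrix only the top partition survives,
    so the cost is one O(n^2) Hadamard product per variable."""
    kt, lt, mt = centre(rbf(x, sigma)), centre(rbf(y, sigma)), centre(rbf(z, sigma))
    n = len(x)
    return (kt * lt * mt).sum() / n**2

rng = np.random.default_rng(1)
n = 300
x = rng.choice([-1.0, 1.0], n)
y = rng.choice([-1.0, 1.0], n)
z_dep = x * y                       # pairwise independent, jointly dependent (XOR)
z_ind = rng.choice([-1.0, 1.0], n)  # fully independent
```

The XOR triple is the canonical case where pairwise tests see nothing but the 3-way interaction statistic is clearly non-zero.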
Additional examples with stationary time series data:
Following the reviewer's comment on $iid$ data, we show that our $d$-order interaction tests can be generalised to time-series data too. The tests introduced here can be used in conjunction with a permutation procedure that approximates the null distribution by shifting through the time observations following a recent paper [1]. We are thus able to test high-order interactions between $d$ stationary random processes.
We have implemented this strategy and show some results from synthetic data in Figure 4 in the attached pdf. Again, we see that the Lancaster test is able to detect factorisations that correspond to the partitions with at least one singleton, and the Streitberg test is able to detect any factorisation (dHSIC failed in both a and b). This means that both tests have controlled type I error. In Figure 4c, we also see that the Streitberg and Lancaster tests reach full power with fewer time observations compared with dHSIC when testing joint independence. Similarly, when testing high-order interactions in Figure 4d, Streitberg is more data-efficient compared with modified dHSIC.
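The shift-based surrogate procedure for stationary series can be sketched generically. Here plain absolute correlation stands in for the kernel statistics, purely for illustration; circular shifts preserve each series' autocorrelation while breaking cross-dependence:

```python
import numpy as np

def shift_pvalue(stat, x, y, n_shifts=99):
    """Approximate the null by circularly shifting one series relative
    to the other; p-value is the fraction of surrogates at least as
    extreme as the observed statistic (with the +1 correction)."""
    observed = stat(x, y)
    null = [stat(x, np.roll(y, s)) for s in range(1, n_shifts + 1)]
    return (1 + sum(v >= observed for v in null)) / (n_shifts + 1)

corr = lambda a, b: abs(np.corrcoef(a, b)[0, 1])
rng = np.random.default_rng(2)
x = rng.normal(size=500)
y_dep = x + 0.1 * rng.normal(size=500)  # strongly coupled series
y_ind = rng.normal(size=500)            # independent series
```

Replacing `corr` with a Lancaster or Streitberg statistic over $d$ series (shifting $d-1$ of them) gives the procedure of [1].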
[1] Liu, Zhaolu, et al. "Kernel-based Joint Independence Tests for Multivariate Stationary and Nonstationary Time-Series." arXiv preprint arXiv:2305.08529 (2023). | Summary: The paper is a nice introduction to measuring high-order interactions between groups of random variables and an experimental demonstration of measuring these interactions for up to 4 variables inclusively. The main focus is on the Lancaster and Streitberg interaction tests via kernel methods such as the Hilbert-Schmidt Independence Criterion. After presenting a mix of prior and, possibly, novel results, some experimental evidence is provided that the approach works.
Strengths: 1. The topic of capturing interactions beyond binary is interesting and largely avoided for lack of adequate tools.
2. The paper is generally well written and the didactic value is great. It can serve as an accessible introduction into the topic.
Weaknesses: 1. The paper's contribution is unclear. Some of the claims from the abstract are already known from previously published work, such as the link to lattice theory (Streitberg) and the work with d=4 [33, 34].
2. The paper claims efficiency of the described permutation procedure but besides high level considerations of its asymptotic behavior no experimental characterization is provided. Especially the interplay of efficiency and accuracy. When the procedure is supposed to be faster for a smaller number of samples how fast does the accuracy drop?
3. I found the presentation unclear. On one hand extreme novelty and generality with respect to d is claimed. On the other, experiments are only shown for d up to 4 (previously available results).
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: 1. What is the single technical novel result in this paper?
2. Experiments demonstrating the effect of the interplay between the accuracy and computational complexity. Computational complexity numbers reported for all runs of the fMRI experiments.
3. Error bars in Figure 3 are missing. Are results so stable?
(the authors have responded to my questions, which is now reflected in my raised score)
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 2 fair
Limitations: nothing to report
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their comments.
- **Novelty**: Although, as the reviewer points out, the mathematical link between lattice theory and the $d$-order probabilistic Streitberg measure was indicated in [25] and briefly mentioned in [33], in neither reference were interaction tests implemented owing to their computational complexity. (Reference [34] does not make any references to lattice theory.) Regarding novelty, the main theoretical contribution here is to leverage lattice theory to formulate and optimise $d$-order Lancaster and Streitberg interaction tests that are computable on real data, thus enabling practical applications. More specifically:
First, we formulate the differences between joint independence, Lancaster interaction and Streitberg interactions in terms of the partition lattice and its sublattices.
Second, we define the $d$-order Streitberg and $d$-order Lancaster interaction tests (with improved understanding of the vanishing conditions on the Lancaster interaction) in the kernel setting, and optimise their computational efficiency by eliminating partitions with singletons in the respective lattices.
Third, we propose a recursive computation strategy such that one would only need to compute the inner product if the join of the two partitions is $\hat{1}$.
Fourth, after the optimisation using centring, the Hilbert-Schmidt norm of each high-order measure can be directly obtained from the product lattice (explained in detail in section B of the SI).
Finally, we show that using the interval lattice we can derive a generalised interaction measure whose test formulation can be directly obtained from the 2nd level of the lattice which captures the corresponding factorisations in the sub-hypotheses.
- **Computational efficiency**: We apologise for our lack of clarity. Our comment about efficiency relates to the fact that instead of having to consider all possible factorisations ($B_d$, Bell number of $d$ variables), we only need to perform $(2^{d-1}-1)$ sub-tests of factorisations corresponding to the 2nd level of the partition lattice, since all other sub-tests are consequences of these. This is a substantial reduction in computation that follows from the lattice formulation. We have corrected our wording in the main text to make this clearer.
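The reduction from all $B_d$ factorisations to the $(2^{d-1}-1)$ sub-tests at the 2nd level of the partition lattice can be illustrated with a small enumeration (a hypothetical helper, not the authors' code):

```python
from itertools import combinations

def bipartitions(elems):
    """All partitions of `elems` into exactly two non-empty blocks,
    i.e. the 2nd level of the partition lattice. Fixing the first
    element in block A avoids counting each split twice."""
    first, rest = elems[0], elems[1:]
    for r in range(len(rest) + 1):
        for picked in combinations(rest, r):
            a = (first,) + picked
            b = tuple(e for e in rest if e not in picked)
            if b:
                yield a, b
```

Every other factorisation refines one of these bipartitions, which is why testing the 2nd level suffices.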
- **Interplay of efficiency and accuracy**: We thank the reviewer for this remark. To explore this, we have performed 4 different experiments each with 3 different interaction strengths to show how decreasing the sample size affects the null rejection rate (Fig 1, attached). In all cases, as the sample size decreases, the null rejection rates of dHSIC decay much faster than both Lancaster and Streitberg.
- **Higher order $d>4$ and practical applications**: To highlight the generality of the proposed statistical tests to any order, we have performed $d=5$ numerical experiments for both the synthetic examples (Fig 2, attached) and the real-world data (Fig 3, attached). The decay of null rejection rate with decreasing sample size for $d=5$ is included in Fig. 1.
- **Our empirical results provide backing for practical applications**: We have shown that: (i) Lancaster is better than dHSIC for testing joint and marginal independence; and (ii) Streitberg is more data-efficient compared to dHSIC for detecting high-order interactions. Moreover, we show that these tests are practical to compute on standard computers for iid data, and we have now added further examples of applications to stationary time-series data (Fig 4, attached).
- **Computational complexity**: The computational complexity of the dHSIC, Lancaster and Streitberg tests for $d=2,3,4,5$ is shown in Table 1 in the main response. For $d=4,5$, we show both the full complexity of the na\"ive enumeration and the substantially reduced complexity after optimisation using the lattice formulation. We also show the computational time for all fMRI experiments in Table 2. All experiments were carried out on a 2015 iMac with a 4 GHz Quad-Core Intel Core i7 processor and 32 GB 1867 MHz DDR3 memory.
- **Error bars**: Thank you for spotting this omission. Error bars have now been added to the bar chart in Figure 3. The results are stable, and this will be changed in the revision.
---
Rebuttal Comment 1.1:
Title: computational complexity
Comment: Thank you for your clarifications and the additional tables. It would be unfortunate if the tables and the additional plot made it into neither the main manuscript nor the supplement of the paper. I hope this additional work is included in the manuscript if it gets accepted.
My doubt on the real-world applicability still holds. Don't you need to compute the proposed measure on all d-element groups of your 100 variables in order to find the ones that do d-interact?
What about the case, when we have considered all triplets for d=3 and found those that do interact in 3-way. Do we eliminate them from consideration when considering d=4? What if the variables that 3-way interact are a part of a larger clique?
I am raising my score to acknowledge your previous clarifications.
---
Reply to Comment 1.1.1:
Title: Re: computational complexity
Comment: We thank the reviewer for their comments, questions and score change. If accepted, we will certainly include in the revised manuscript all the additional work (figures, tables, explanations, discussion) in these responses.
As the reviewer points out, the problem of detecting $d$-order interactions among a group of $N$ variables is combinatorial, as it entails checking the groups of $d$ variables chosen from the $N$ variables. Clearly, such combinatorial problems become infeasible as $N$ and/or $d$ become large. However, recent work on real data from various application areas has focused on revealing any interactions beyond pairwise, i.e., $d>2$. It has been shown that low order interactions ($d=3,4$) already make a significant difference to network structure and network dynamics [1][2][3], highlighting the potential benefits to be gained from higher order interaction tests.
Furthermore, many well-recognised and widely used theoretical approaches only have closed forms for $d=3$ [4][5], and cannot be generalised to arbitrary $d$ since the number of terms in those approaches is related to the Dedekind number which becomes rapidly intractable [6]. In contrast, our paper shows that the Streitberg interaction can be explicitly defined for any $d$, and we devise theoretical and computational strategies to reduce its computational cost via the lattice theory formulation.
We thank the reviewer for the second comment on the recursive aspects of the computation of $d$-order interactions. Indeed, it is naturally possible to discount the computation of certain higher-order interactions when some lower-order interactions are present. If all the lower order interactions are absent, then testing the $d$-order interaction is the same as testing the joint independence for $d$ variables. For example when $d=3$ and all pairwise interactions are absent, then testing the Streitberg interaction reduces to testing joint independence of 3 variables. More generally, if we test the interactions bottom-up, i.e., recursively from the lower orders upwards, the expression for Streitberg becomes simpler whenever there is a lower-order independence. The simplification is possible because the Streitberg interaction can be rewritten as the sum of the differences between a factorisation and the product of the marginals due to the fact that the sum of Mobius coefficients is zero. Hence the presence of a lower order independence allows us to simplify the $d$-order interaction formula.
Alternatively, there are also cost-reducing simplifications if we do the tests top-down (i.e. starting from order $d$ downwards), although the problem still remains combinatorial. If there are partial rejections for some tests involved in the $d$-order Streitberg test, we can narrow down the possible choices of the factorisation (as discussed in Section C of SI). This allows us to eliminate the lower order factorisations that fall in the lattice branches of the rejected 2nd level factorisations, thus reducing the total tests needed for the factorisation of the $d$-variables. Further simplifications are achieved by accounting for overlaps of the lattice branches.
Your comment on lower-order interactions that may be part of a larger clique also lands on an important direction that complements the above points on recursive efficiency. Let us consider a system whose interactions have been identified using the Streitberg interaction tests up to order $d$, as we have shown for the fMRI data set. This allows us to build a hypergraph (see paper and the additional computations and figures in attached pdf). One could then interpret the hypergraph as follows: i) If a $d$-order hyperedge is detected and none of the lower order $(d-1)$-order hyperedges are present then we say that this $d$-order hyperedge reflects a purely synergistic (or emergent) interaction between the $d$ variables; ii) if the $d$-order and all the lower order interactions are present, then we say that the $d$-order interaction is purely redundant (and corresponds to a simplicial complex construction); iii) if some, but not all, lower order interactions are present (e.g., for 4 variables, only the 4-way interaction and one 3-way interaction are present), then both synergy and redundancy are present in this group of variables. Such a construction could offer an alternative, statistically-motivated approach to computing synergy and redundancy, an important current topic of research in computational neuroscience.
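The synergy/redundancy reading in (i)-(iii) above amounts to a simple classification rule over the set of detected interactions. A sketch (the example edge sets are hypothetical):

```python
from itertools import combinations

def classify_hyperedge(edge, detected):
    """Classify a d-order hyperedge given all detected interactions:
    'synergistic' if none of its lower-order sub-interactions (size >= 2)
    was detected, 'redundant' if all of them were (a simplicial-complex
    construction), 'mixed' otherwise."""
    detected = {frozenset(e) for e in detected}
    subs = [frozenset(c) for r in range(2, len(edge))
            for c in combinations(edge, r)]
    hits = sum(s in detected for s in subs)
    if hits == 0:
        return "synergistic"
    if hits == len(subs):
        return "redundant"
    return "mixed"

interactions = [(1, 2), (1, 3), (2, 3), (1, 2, 3),  # a simplicial triangle
                (4, 5, 6)]                          # a bare 3-way interaction
```

Applied to the hyperedges identified by the Streitberg tests, this yields the statistically motivated synergy/redundancy decomposition sketched above.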
- [1] Battiston et al. Physics Reports 874 (2020): 1-92.
- [2] Santoro et al. Nature Physics 19.2 (2023): 221-229.
- [3] Varley et al. PNAS 120.30 (2023): e2300888120.
- [4] Ince, arXiv:1702.01591 (2017).
- [5] Williams and Randall, arXiv:1004.2515 (2010).
- [6] Van Hirtum et al. arXiv:2304.03039 (2023). | Rebuttal 1:
Rebuttal: We thank the reviewers for their clear reviews and thoughtful questions. In addition to the specific answers in the responses to each reviewer, we would like to briefly address here three overarching themes that have appeared in the reviews: the computational complexity of our method; implications of the analysis of real-world neuroimaging data; and the novel contributions of this manuscript above and beyond prior work. We address each of these in turn:
- **Computational complexity**. As previously stated, our method scales combinatorially in $d$, as would be expected for a method that evaluates $d$-order interactions. Nonetheless, for finite $d$ values relevant in practical applications our method achieves substantial and measurable improvements in complexity (see Table 1 below).
This was achieved by three technical contributions:
1. elimination of partitions with singletons via kernel centring;
2. formulation of an optimal contraction ordering; and
3. a recursive computational strategy.
These reductions follow from our mathematical formulation and links to lattice theory. Moreover, we run various experiments and highlight the practicality of implementing these tests with different sample sizes and also with different data types (i.e., stationary time series) which were previously unattainable without our theoretical improvements. Some of this work was included in the SI, and additional work is explained in the individual responses below. In the revised version of the paper, we will include further discussion of prior work and add experimental results to show explicitly that the ways to optimise using lattice theory are novel leading to substantial reductions in computation.
- **Implications of the results in real-world data**. The key take-away message from our experiments --- beyond the fact that the proposed quantities are tractable --- is that brain activity has substantial and heterogeneous high-order structure that can be revealed using our method. This high-order structure is non-trivial, in the sense that it cannot be predicted from pairwise interactions, and calls for new hypergraph analyses to understand brain function across scales. In the revised version of the paper we will report new experiments emphasising the importance of this high-order structure and discuss links with supporting neuroscientific evidence.
- **Novelty**. The paper's main novel theoretical contribution is the use of lattice theory to formulate and optimise $d$-order Lancaster and Streitberg interaction tests. The novel contributions include:
1. We use the partition lattice and its sublattices to make a qualitative comparison between the information encoded in joint independence, Lancaster interaction and Streitberg interactions.
2. We formalise the $d$-order Streitberg and the $d$-order Lancaster in the kernel test setting and optimise their computational efficiency by eliminating the partitions with singletons in the respective lattices.
3. We propose a recursive computation strategy such that one would only need to compute the inner product if the join of the two partitions is $\hat{1}$.
4. After the optimisation using centring, the Hilbert-Schmidt norm of each high-order measure can be directly obtained from the product lattice (explained in detail in section B of the SI).
5. We show that using the interval lattice we can derive a generalised interaction measure. The respective test formulation is also related to lattice since the corresponding factorisations in the sub-hypotheses can be directly obtained from the 2nd level of the lattice.
**Table 1**. Time complexity for $d$-order interaction estimators.
| $d$-order | dHSIC | Lancaster$\Rightarrow$(optimised) | Streitberg$\Rightarrow$(optimised) |
|-----------|--------------------|--------------------------------------------------------|--------------------------------------------------------|
| 2-way | $\mathcal{O}(n^2)$ | $\mathcal{O}(n^2)$ | $\mathcal{O}(n^2)$ |
| 3-way | $\mathcal{O}(n^2)$ | $\mathcal{O}(n^2)$ | $\mathcal{O}(n^2)$ |
| 4-way | $\mathcal{O}(n^2)$ | $\mathcal{O}(n^8)$$\Rightarrow${$\mathcal{O}(n^2)$} | $\mathcal{O}(n^8)$$\Rightarrow${$\mathcal{O}(n^3)$} |
| 5-way | $\mathcal{O}(n^2)$ | $\mathcal{O}(n^{10})$$\Rightarrow${$\mathcal{O}(n^2)$} | $\mathcal{O}(n^{10})$$\Rightarrow${$\mathcal{O}(n^3)$} |
**Table 2**. Time for experiments in Fig 3.
| | SOM | VIS | SAL | DAN | DMN | FPN | LIM | Random |
|-----------|----------|---------|-------|--------|---------|--------|-----|--------|
| 2-way | 1s | 2s | 1s | 2s | 2s | 2s | 1s | 12s |
| 3-way | 12s | 18s | 7s | 13s | 8s | 8s | 1s | 13s |
| 4-way | 5m36s | 5m | 2m44s | 2m31s | 2m | 2m10s | 1s | 2m41s |
| 5-way | 2h18m24s | 1h30m9s | 1h3m | 54m40s | 1h12m4s | 27m16s | 2s | 29m |
**Table 3**. Number of terms in the test statistics before and after eliminating the partitions with singletons.
| # of var | 4 | 5 | 6 | 7 | 8 | 9 |
|----------------------------------|--------------------|---------------------|----------------------|-----------------------|------------------------|--------------------------|
| Streitberg$\Rightarrow$(centred) | 15$\Rightarrow$(4) | 52$\Rightarrow$(11) | 203$\Rightarrow$(41) | 877$\Rightarrow$(162) | 4140$\Rightarrow$(715) | 21147$\Rightarrow$(3425) |
| Lancaster$\Rightarrow$(centred) | 12$\Rightarrow$(1) | 27$\Rightarrow$(1) | 58$\Rightarrow$(1) | 121$\Rightarrow$(1) | 248$\Rightarrow$(1) | 503$\Rightarrow$(1) |
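The Streitberg row of Table 3 can be cross-checked independently: the uncentred statistic has one term per set partition (the Bell number $B_d$), and centring retains only the partitions without singletons (OEIS A000296, obtainable by inclusion-exclusion over Bell numbers). A quick sketch:

```python
from math import comb

def bell(n):
    """Bell number B_n via the Bell triangle."""
    row = [1]
    for _ in range(n):
        new = [row[-1]]
        for v in row:
            new.append(new[-1] + v)
        row = new
    return row[0]

def partitions_without_singletons(n):
    """A000296: choose which k elements are singletons, then apply
    inclusion-exclusion over the Bell numbers of the remainder."""
    return sum((-1) ** k * comb(n, k) * bell(n - k) for k in range(n + 1))
```

Both sequences reproduce the before/after counts in the Streitberg row of Table 3.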
Pdf: /pdf/de2e5b51e7b56218fb49678c2caef40425d5121b.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Generator Born from Classifier | Accept (poster) | Summary: This paper tackles the problem of reconstructing an image generator, without relying on any data. The authors propose a learning paradigm in which the generator is trained to ensure that the convergence conditions of the network parameters are satisfied over the generated distribution of the samples.
Strengths: 1. The paper addresses an important and challenging problem.
2. The method appears to be novel.
Weaknesses: 1. The evaluation seems to be weak. The authors have performed only a qualitative evaluation. The experimental section seems incomplete without quantitative evaluation using metrics such as precision, recall, and FID.
2. Related works on data-free knowledge distillation are missing.
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: 1. Why does the generator generate realistic-looking samples even though it could cheat by learning to generate adversarial examples?
2. Are there any experimental validations/theoretical justifications to confirm if the model is not just reconstructing the training data examples?
3. Can the generator capture all the modes of the training data distribution sufficiently well?
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 3 good
Limitations: The authors haven't discussed the potential negative social impact of their work, in particular, these methods could be used to recover the training data which may cause privacy concerns.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ### **Why does the generator generate realistic-looking samples even though it could cheat by learning to generate adversarial examples?**
We appreciate the reviewer's question. Indeed, our loss design inherently penalizes the generation of adversarial samples. Since our loss is derived from the KKT conditions of the optimization problem stated in Theorem 1, it mandates the training of a generator such that the samples it produces ensure the optimality of the classifier network parameters. An intuitive explanation is as follows:
Our approach requires the generator to address this question: "How should I generate data to ensure that when a classifier is trained using the data I generate, the resulting classifier parameters precisely match the given classifier's parameters?"
Given that overparameterized neural networks tend to overfit their training dataset, the generated data distribution needs to closely resemble the real data distribution to guarantee the optimality of the classifier network parameters. As a result, our method is capable of producing realistic-looking samples that are close to the true training data distribution.
### **Are there any experimental validations/theoretical justifications to confirm if the model is not just reconstructing the training data examples?**
We appreciate the reviewer's question. To demonstrate that we are not merely reconstructing the original data, we conducted an attribute editing experiment and show the results in Fig. 1 in the attached PDF file. By manipulating the input noise, we generated continuously rotating digits, which are not included in the original dataset. This experiment demonstrates that we can indeed generate samples not present in the original dataset, akin to a generator model trained directly on the data.
### **Can the generator capture all the modes of the training data distribution sufficiently well?**
We appreciate the reviewer pointing out the issue with our method in terms of capturing all the modes equally. As shown in Fig. 4 in our manuscript, our method does not perform well in reconstructing certain modes (certain categories). We hypothesize that there might be two reasons leading to this phenomenon. First, different categories may have varying complexities in their data distributions, making it more challenging for the generator to produce certain categories. Second, the generator might inherit the classifier's bias. If the classifier does not fit well for certain categories, the information about that category in the classifier might be limited or inaccurate. This, in turn, would affect the generator's performance on that particular category.
### **Evaluation**
We appreciate the reviewer's suggestion to include more evaluations. We have calculated the FID and the precision and recall. The results are shown in Tab. 1.
### **Related Work**
We appreciate the reviewer's suggestion to include related work on Data-Free Knowledge Distillation. This will be incorporated into our revised version.
### **Potential Social Impact**
We appreciate the reviewer pointing out the potential negative social impact of our method in terms of privacy leakage. We mentioned the risk our method poses to privacy protection in lines 319-320. In our revised version, we will further expand the discussion on this potential negative social impact.
---
Rebuttal Comment 1.1:
Comment: Thanks for the detailed response.
[Are there any experimental validations/theoretical justifications to confirm if the model is not just reconstructing the training data examples?]
The attribute editing experiment seems to be quite insightful. A quantitative evaluation would also be quite helpful in this aspect.
Regarding performance
From Table 1, it seems that the quality of the generated images is inferior to that of Li, 2022. I assume the advantage would then only be with respect to complexity. Am I correct?
Apart from these, the authors have clarified all my concerns. Feedback on these additional points would be appreciated.
---
Reply to Comment 1.1.1:
Title: Response to Reviewer LZcP
Comment: ### **Are there any experimental validations/theoretical justifications to confirm if the model is not just reconstructing the training data examples?**
#### **Quantitative Evaluation**
We appreciate the reviewer's question. To quantitatively assess whether our method can generate images distinct from the training data, we measured the mean-max Structural Similarity Index (mean-max-SSIM). The computation for this metric is as follows: (1) For one generated image, we measure its SSIM against every image in the classifier's training dataset and select the maximum SSIM. This step measures the similarity between the generated image and the most similar image from the training set. (2) For a set of generated images, we perform step (1) for each generated image and then average the resulting max-SSIM values to obtain the mean-max-SSIM. A lower mean-max-SSIM indicates that the generated images, even when compared to their closest counterparts in the training set, are relatively distinct, suggesting the generation of novel data outside the original dataset.
We compared our method with Haim et al., 2022, which is designed for reconstructing the training dataset. The FID scores of our method and Haim et al., 2022 are comparable, which are shown in Tab. 1 in the attached PDF file. However, our method exhibits a lower mean-max-SSIM, indicating its capability to produce novel images distinct from the original dataset.
| | **Ours** | **Haim et al.,2022** |
| :-: | :-: | :-: |
| mmSSIM | 0.012±3e-4 | 0.020±1e-4 |
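The two-step computation described above can be sketched as follows. Note that `simplified_ssim` is a hypothetical, non-windowed stand-in for the SSIM implementation actually used (which the rebuttal does not specify); only the mean-max aggregation scheme is the point here.

```python
import numpy as np

def simplified_ssim(x, y, c1=1e-4, c2=9e-4):
    """Global (single-window) SSIM between two images with values in [0, 1].

    A real evaluation would use a windowed SSIM (e.g.
    skimage.metrics.structural_similarity); this simplified form only
    serves to illustrate the mean-max aggregation.
    """
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / (
        (mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

def mean_max_ssim(generated, training):
    # Step (1): for each generated image, take the max SSIM over the
    # training set (i.e., against its most similar training image).
    # Step (2): average these maxima over all generated images.
    maxima = [max(simplified_ssim(g, t) for t in training) for g in generated]
    return float(np.mean(maxima))
```

A mean-max-SSIM near 1 would indicate near-duplicates of training images, while the low values reported above (0.012 vs. 0.020) suggest the generator produces novel samples.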
### **Comparison with Li, 2022**
We thank the reviewer for the question. To clarify further, our approach differs from Li, 2022 mainly in two aspects. Besides the complexity aspect the reviewer mentioned, the other distinction is that Li, 2022 necessitates prior information from the original dataset to guide image generation. By contrast, our method operates without this information.
Specifically, Li, 2022 requires the knowledge of the mean of training images and incorporates the norm between the generated images and the mean of training images as a regularization term in the loss function. We experimentally observe that the method in Li, 2022 heavily relies on this prior knowledge, without which the method tends to produce adversarial samples. | Summary: This paper addresses a novel and intriguing issue, namely, training a generator reliant on a pre-trained classifier, rather than extensive training data. The authors propose an innovative solution, fundamentally aimed at enabling the generator's training process to continuously extract and leverage information about the dataset distribution from the classifier's parameters. Based on the theory of Maximum-Margin Bias of gradient descent, the authors design a stationarity loss and a duality loss to ensure that the distribution of data generated by the trained generator guarantees the optimality of the pre-trained classifier. As an expansion, the paper also proposes an algorithm to collectively train a generator using information from multiple pre-trained classifiers. The proposed learning paradigm is empirically validated through experiments in various image generation tasks.
Strengths: This paper investigates an intriguing and innovative task: training a generator using information from the parameters of pre-trained classifiers, rather than relying on training data. Given the vast number of pre-trained models our community has accumulated, I view this as a task of substantial value with potential impact.
The paper clearly outlines its motivation and problem definition, and offers a technically sound and concrete method, underpinned by the theory of Maximum-Margin Bias of gradient descent. The paper possesses a clear logical structure and reasonable organization. The proposed method and the technical details are validated through several convincing proof-of-concept experiments, as shown in Fig. 3, 4 and 5. The experiments across multiple image generation tasks exhibit promising results.
Weaknesses: Considering that this is a new attempt in this task, the experimental setup is understandably simple. Both the generator and classifier architecture utilized are elementary compared to the practice today. Besides this, I have some minor comments, which I have listed as questions below.
Technical Quality: 4 excellent
Clarity: 3 good
Questions for Authors: 1. Ln 226 mentions that the construction of the system of equations depends on certain random samples. Are these random samples drawn from the original training dataset? If the solution for \Lambda relies on samples from the training dataset, does it imply that the method proposed in the paper intrinsically presumes \Lambda to be known?
2. The dataset used for training the classifier in the paper is relatively small, consisting of only 500 samples. What is the reasoning behind this setup? How would increasing the size of the training dataset impact the results?
3. Typo
a. Ln. 241, the set of classifiers is missing {}
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 3 good
Contribution: 3 good
Limitations: Owing to the theory of Maximum-Margin Bias, the proposed method has limitations regarding the architecture of the pre-trained classifier and the pre-computation of \Lambda. However, the authors have adequately addressed these challenges.
The paper also points out potential privacy risks that may arise from the application of the proposed method, as well as outlining some potential applications.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ### **How the random samples are generated when calculating Lambda?**
We appreciate the reviewer's question. The definition of a quasi-homogeneous function is independent of the dataset; here, the random samples refer to random noise, not samples selected from the original dataset. For instance, if the classifier's input is an image of size 3xHxW, sampling random noise of size 3xHxW as the classifier's input is sufficient for determining Lambda. Therefore, the value of Lambda does not depend on the training dataset but solely on the pre-trained neural network. Given a neural network, Lambda can be determined using the method described in the 'Determine Lambda' section of the paper, so it does not need to be known in advance.
### **Concern about the Number of Training Samples**
We appreciate the reviewer's inquiry about our experimental setup. Since the validity of the Maximum-Margin Bias theory depends on the convergence of the pre-trained network, we used a small dataset to ensure the classifier's convergence. We conducted experiments using larger datasets, and the results are presented in Tab. 5 in the attached PDF file. As the number of training samples increases, more information about the data distribution becomes available, leading to improved quality of the generated images.
### **Typos**
We appreciate the identification of our typos, and we will correct them.
---
Rebuttal Comment 1.1:
Title: Thanks
Comment: Thanks for your rebuttal. I have no more questions and keep my positive score. | Summary: The paper proposes a method to generate image samples from a trained neural classifier. It proposes a loss to train an image generator where the loss is based on recent results from the realm of the implicit bias of gradient descent of quasi-homogeneous functions. It shows empirical results on 2D data and on models trained on MNIST and CELEBA datasets.
Strengths: - Extension of reconstruction to quasi-homogenous neural networks
- Trying to create a generator by using a fixed prior on top of the reconstruction scheme
Weaknesses: I think that the main weakness is the novelty in the paper. The paper seems like an extension of Haim et al. 2022 to multiclass quasi-homogeneous networks, using a "generative" prior. However:
- Note that a multiclass extension to Haim et al was done in Buzaglo et al. 2023 ICLR workshops (https://openreview.net/forum?id=SBstNm4OajH), which is somewhat concurrent).
- the extension of the reconstruction loss to quasi-homogeneous networks is very slim - technically speaking, the only difference is the introduction of $\Lambda$ in the loss. If this is truly necessary, it should be supported by some comparisons to Haim et al., which also showed results on non-homogeneous networks. Such comparisons are not provided in the paper.
- The generative prior is interesting, but there is no evaluation in the paper for a generative model. All in all, the results are quite similar to those of Haim et al. only it seems that they were not fully converged (what is the difference between a "bad" reconstruction and a "generated sample"?)
The paper is poorly written - terminology is cumbersome and hard to follow, which makes it difficult to understand key components of the paper. Many important technical details are missing from the paper.
Some comments in introduction and related section are unclear or irrelevant.
Technical Quality: 2 fair
Clarity: 1 poor
Questions for Authors: Evaluation is lacking:
- the main claim of the paper is the proposed generator that is being born from a classifier. But there is no evaluation whatsoever of generative models. Many common metrics for evaluating generative models exist, like FID or IS (for a GM that models a classified dataset), but none of these are provided.
- There are only very few visual examples in the paper, and no further analysis
- The method proposes creating a generative model - what is the point in comparing the outputs of the model to their nearest neighbors (Fig. 6,7,8)? (there are established methods for evaluating generative models, see above)
- No comparison to baselines: basically, the results look like a lesser version of the Haim et al. results. What happens if the paper uses Haim et al. reconstruction loss on their trained models? The only concern may be of the multiclass models - such an extension is shown in Buzaglo et al. 2023, which is somewhat concurrent to this submission. However, the current submission can show a comparison of their proposed method to that of Haim et al. only on binary classifiers.
- Some other baselines could be other works that try to "turn" a classifier to generator:
1) Use Classifier as Generator, Li 2022
2) Traditional Classification Neural Networks are Good Generators: They are Competitive with DDPMs and GANs, Wang and Torr, 2022
Paper is poorly written. Main parts of the paper are difficult to follow and understand:
- The main contribution in the paper is the training of a generator (as assumed from the title). However, the discussion of the generator itself is somewhat hidden in lines 185-186. It took me a while to figure out that this is the so called generator that the paper discusses. I suggest emphasising this element much more.
- the discussion of determining $\Lambda$ (in lines 209-229) - I can't say that I understood the derivation. Why is $\Lambda' = \frac{1}{1-\Lambda_{ii}}\Lambda$? I could not understand the two properties. I also did not understand the plot shown in Fig. 3
- discussion in lines 201-205 - why are this network h and the ReLU necessary? This part is very cryptic and unclear (neither the mechanism itself, nor the motivation or intuition behind this design choice).
- What is the difference between Eq. (7) and Eq. (12) - it seems like the exact same equation
- Why is the extension to multiple classifiers (lines 239 et seqq.) necessary? What is the motivation for that? I also could not understand the mechanism in this paragraph.
Many important technical details are missing from the paper:
- what are the architectures of the trained models (said to be quasi-homogeneous)?
- what is the architecture of the generator g?
- what is the architecture of the network h? (in line 202)
Some comments in introduction and related section are unclear or irrelevant:
- "training process does not rely on any training data" (line 44 and 56) is a bit misleading - it relies on the data on which the classifier was trained. (also it seems from eq.4 that the number of training samples N is assumed to be known).
- "GAN require an additional classifier" (line 71) - what is the meaning here? unconditional GANs do not require additional classifiers
- "the classifier is trained concurrently with the generator" (line 73) - is this referring to the discriminator? because this is not the classifiers in the sense that are used in the paper. The current paper submission discuss "usual" classifiers (that solve the classification problem of a supervised classification dataset) and discriminators are classifying between true and fake samples.
- References in line 85 - "model inversion... given the output and the trained model" - Yin et al. 2020 do not assume the outputs but rather infer training samples from the trained batchnorm statistics. Gal et al. 2022 is really irrelevant here (textual "inversion" tries to find a token in the input of the text encoder that corresponds to a given set of images. It has nothing to do with training samples reconstruction).
Minor:
- line 207 and supplementary 68 - L_{lagrange} should be L_{stationarity}?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 1 poor
Contribution: 1 poor
Limitations: n/a
Flag For Ethics Review: ['No ethics review needed.']
Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ### **Comparison with [1] Haim et al. 2022 and [2] Buzaglo et al. 2023**
We appreciate the reviewer's comparison of our work with [1] and [2]. We kindly direct the reviewer to the global response, where we systematically compare ours with [1] and [2].
#### **The importance of Lambda**
Lambda's introduction is vital for complex classifiers. While [1] mandates homogeneous networks, we allow quasi-homogeneous ones. This necessitates solving for Lambda in each network and integrating it into the loss of generator training.
### **Evaluation**
#### **Comparisons with [3] Li, 2022 and [4] Wang and Torr, 2022**
We appreciate the reviewer's suggestion to compare our method with those in [3] and [4]. We kindly direct the reviewer to the global response, where we systematically compare ours with [3] and [4].
#### **Quantitative Metrics**
Thanks. We compared the quantitative metrics of our method with [1] and [3] and reported in Tab. 1 in the submitted PDF for the results.
Due to the fact that [4] did not release their source code (the provided GitHub link directs to a non-existent repository), we attempted to implement their work. However, given the tight timeframe for the rebuttal, we were unable to fully reproduce the results reported in their paper. As a result, we have not included the results from [4] at this time.
#### **Further Analysis**
We are grateful for the reviewer's suggestion about further analysis. We conducted an attribute editing experiment and show the results in Fig. 1 in the attached PDF file. By manipulating the input noise, we generated continuously rotating digits, which are not included in the original dataset, akin to a generator model trained directly on the data.
#### **The reason for comparing with the nearest neighbors**
We appreciate the question posed by the reviewer. Our comparison with nearest neighbors serves merely as a reference to illustrate the perceptual quality of the images generated by our method. By comparing with real data, it's evident that our generator produces images with good quality.
### **Writing**
We greatly appreciate the reviewer's suggestions and will make modifications accordingly.
1. We will highlight the sections about the generator.
2. We will further clarify our discussion on solving for Lambda.
* Solving for Lambda involves two steps: (1) constructing a system of linear equations about Lambda and (2) solving the system.
* Two properties used to construct the system are introduced in Section 3 of "The Asymmetric Maximum Margin Bias Of Quasi-Homogeneous Neural Networks".
* We will include the derivation of Lambda' in the revised version.
* The purpose of Fig. 3 in the manuscript is to verify whether the Lambda obtained by our method accurately estimates the true Lambda of the network. According to the definition, given a quasi-homogeneous model $\Phi$, we have $\Phi(x;e^{\alpha \Lambda}\zeta) = e^{\alpha}\Phi(x;\zeta)$. If the estimated Lambda is correct, then substituting it into the left-hand side of the equation, the equation should still hold. In Fig. 3, we plotted the values of the right-hand side (solid line) and the left-hand side (dotted line) of the equation, with the estimated Lambda, for different $x$ and $\alpha$. The overlap of the two lines indicates that our estimate is accurate.
3. The introduction of $h$ and ReLU in lines 201-205 is for the estimation of the KKT multiplier $\mu$. Given a pair of generated data $x$ and label $y$, $\mu'=h(x,y;\eta)$.
Since the KKT multiplier must be non-negative, we let $\mu=ReLU(\mu')$.
4. Eq. 7 and 12 represent the loss function used for training and the optimization problem used for training, respectively. In the revised version, we will consolidate Equations 7 and 12 into a single equation.
5. The extension to multiple classifiers.
* Integrating multiple classifiers for training a generator can lead to a more powerful generator. Firstly, different classifiers encounter different data (see the case in Fig. 5 in our manuscript), and combining knowledge from multiple classifiers can enhance the diversity of generated samples. On the other hand, using multiple classifiers is also a means to improve the performance of the resulting generator. As shown in Tab. 4 in the attached PDF, a generator obtained using two smaller classifiers performs better than one obtained using a larger classifier. This is because the two classifiers each fit the data they encounter without interfering with each other, thereby preserving the information from the training dataset more effectively.
* We provide an example of our mechanism: The input to the generator is (noise, label, classifier index). Suppose there are two classifiers, trained on even and odd digits from MNIST, respectively. To generate the first even digit, '0', the input to the generator is (noise, 0, 0); to generate the second odd digit, '3', the input is (noise, 1, 1).
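The quasi-homogeneity identity behind the Fig. 3 check (point 2 above) can be verified numerically on a toy case. The bias-free two-layer ReLU network below is a hypothetical stand-in for the paper's classifiers, not their actual architecture; any per-layer exponents summing to one make it $\Lambda$-quasi-homogeneous because ReLU is positively homogeneous.

```python
import numpy as np

def phi(x, W1, W2):
    # Bias-free two-layer ReLU network; ReLU is positively homogeneous,
    # so scaling each layer's weights rescales the output predictably.
    return W2 @ np.maximum(W1 @ x, 0.0)

rng = np.random.default_rng(0)
W1 = rng.standard_normal((5, 3))
W2 = rng.standard_normal((2, 5))
x = rng.standard_normal(3)

# With per-layer exponents lam1 + lam2 = 1, the definition
# Phi(x; e^{alpha Lambda} zeta) = e^{alpha} Phi(x; zeta) holds exactly.
lam1, lam2, alpha = 0.3, 0.7, 1.5
lhs = phi(x, np.exp(alpha * lam1) * W1, np.exp(alpha * lam2) * W2)
rhs = np.exp(alpha) * phi(x, W1, W2)
assert np.allclose(lhs, rhs)  # the two "lines" overlap, as in Fig. 3
```

An incorrectly estimated Lambda (e.g. lam1 + lam2 != 1 here) would make the two sides diverge, which is exactly what the overlap test in Fig. 3 rules out.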
### **Technical Details**
We thank the reviewer. We indeed include the structure of the classifier and that of the generator in ln 291-294 and ln 295-299, respectively. Specific architectures of the classifier, generator, and net $h$ are shown in Fig. 2 in the submitted PDF. We will also release our code.
### **Other Comments in Introduction and related work**
We appreciate the reviewer's detailed suggestions, and we will make modifications and add explanations accordingly.
1. What we intend to convey here is that the training process of the generator does not require the use of data, but merely a pre-trained classifier and the number of training samples, which does not contain information about data distribution.
2. In lines 71 and 73, the classifier we mention is the discriminator, as the discriminator is indeed a binary classifier.
3. We will revise rigorously the discussion about model inversion in the related work based on your comments.
### **Typos**
We thank the reviewer for pointing out our typos, and we will correct them.
---
Rebuttal Comment 1.1:
Comment: I sincerely thank the authors for their elaborated rebuttal.
As it seems by looking at the FID/ISC scores, the results are not very different from Haim et al. 2022, and both are pretty "bad" for a generator (very high FID). The way I see it, the main reasons are:
- I'm not convinced that the proposed method in the current submission really result in different outputs than simply using Haim et al. 2022 (without the estimation of the lambda etc.).
- If the "definition" of a generative model is to model a certain distribution, then using the implicit bias results of Lyu & Li 2019, Kunin et al. 2023 just doesn't make sense. Why would inverting the classifier result in a generative model in the first place? Sample reconstruction is different from modeling a distribution. Simply showing that some reconstruction methods actually work is not equivalent to saying that they constitute a generative model. This is evident from the FID results, and is the reason why they are so "bad" for a generative model. I assume that many outputs from the proposed "generator" in the work are outputs that don't look at all like images from the dataset, because these outputs are in a way solutions to the KKT conditions (of the quasi-homogeneous case) - which has nothing to do "in general" with modeling the true distribution of the training set. These "bad" outputs are probably also the reason for the very high FID score.
On the good side, I wasn't aware to the difference in the number of parameters that are being optimized. This is pretty interesting, and should be emphasized. I also think that the results on attribute editing are impressive - but the details on how they were produced could be clearer.
On the bottom line, I find it hard recommending acceptance for the paper in its current form.
It would be much easier if the paper more clearly presented the novelty w.r.t. previous works and more carefully compared to them; if the paper were more clearly written, especially the technical parts; and if it implemented the rest of the remarks in the reviews (some of which were answered in the rebuttal). Thanks.
---
Reply to Comment 1.1.1:
Title: Response to Reviewer KxBC
Comment: ### **Further Comparison with [1] Haim et al., 2022**
#### **To address:** ***"by looking at the FID/ISC scores, the results are not very different from Haim et al. 2022 .."*** **and** ***"I'm not convinced that the proposed method in the current submission really results in different outputs than simply using Haim et al. 2022 .."***
We appreciate the reviewer's concern. Our global response and Tab. 1 in the PDF compare our method to [1] regarding perspectives, such as complexity and assumptions. Here, we also compare our outputs with [1]'s.
1. The objective of [1] is to recover images, and the recovered images are their output. However, our goal is to train a generator. The output of our method should be the trained generator, not the images produced by the generator. This generator can produce perceptually satisfying images not present in the original training data, supporting conditional sampling and attribution editing. These capabilities are indeed beyond the reach of [1].
2. While our method and [1] might not show significant differences in terms of FID/ISC, our parameter space is smaller, and the image generation process is less complex. This indicates that we achieved comparable FID/ISC with a reduced cost, highlighting an advantage of our method over [1].
3. The fact that our method and [1]'s approach exhibit no significant difference in FID/ISC does not imply that the generated images of our method and [1] are largely similar. As shown in Figs. 4 and 7 of the attached PDF, it's evident that our method produces images with clearer outlines and more complete shapes.
Admittedly, given the ambitious nature of our proposed task and being a pioneering effort in this direction, the qualitative result of our approach is not that impressive. This is to be expected, given the inherent challenges of training a generative model, especially as our training was conducted without accessing any data.
#### **To address** ***"Why would inverting the classifier result in a generative model ...?"***
We thank the reviewer's question and address it from the following three perspectives.
1. Feasibility: A generative model is essentially an approximation of the conditional probability $P(x|\epsilon, y)$, and its training relies on knowledge about the data distribution $P_x$. When data is available, information regarding $P_x$ can be acquired by accessing the training data directly. However, in our task, training data is absent and only a pre-trained classifier is available. To tackle this task, we adopt the Maximum-Margin Bias (MMB) theory, which the reviewer kindly referred to as the implicit bias theory, to extract information related to training data embedded within the classifier parameters and then train the generator. The crux of this approach's feasibility lies in the sufficiency of the information extracted by the MMB theory. This sufficiency is supported by the findings in [1], as it's conceivable to train a generator using data reconstructed according to [1].
Such an idea demonstrates the sufficiency of the information the MMB theory can extract, affirming the feasibility of our approach.
2. Parameterization: The distinct parameterization ensures that our approach trains a generator, rather than merely reconstructing data as in [1]. To reconstruct training images, the learnable parameters in [1] are the image pixels themselves. By contrast, we designed a neural network where the network inputs are random noise and label index, and the network output is the image. The weights and biases of this neural network are our learning targets. While both our work and [1] design the loss function based on the MMB theory, this difference in parameterization directly leads to a functional divergence: [1] yields images, while our method produces a generative model.
3. Experimental Evidence. As demonstrated by our experimental results, we indeed obtained a generator with the capabilities of conditional sampling and attribution editing, though it currently underperforms in metrics like FID.
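The parameterization contrast in point 2 can be sketched in a few lines. Everything below (dimensions, architecture, scale factors) is illustrative only, not the actual models from the paper or from [1].

```python
import numpy as np

rng = np.random.default_rng(0)
img_dim, noise_dim, n_classes, hidden = 28 * 28, 16, 10, 64

# [1]-style parameterization: the learnable parameters ARE the pixels of
# a fixed set of images, optimized directly against the KKT-based loss.
reconstructed_pixels = rng.standard_normal((500, img_dim))

# Generator parameterization: the learnable parameters are network
# weights mapping (noise, one-hot label) -> image, so fresh samples can
# be drawn after training. The architecture here is hypothetical.
W1 = 0.1 * rng.standard_normal((hidden, noise_dim + n_classes))
W2 = 0.1 * rng.standard_normal((img_dim, hidden))

def generate(label):
    z = rng.standard_normal(noise_dim)           # fresh noise each call
    onehot = np.eye(n_classes)[label]
    h = np.tanh(W1 @ np.concatenate([z, onehot]))
    return W2 @ h                                # a new 784-dim image

sample = generate(label=3)
assert sample.shape == (img_dim,)
```

Optimizing `reconstructed_pixels` can only ever reproduce a fixed set of images, whereas optimizing `W1` and `W2` yields a sampler that maps fresh noise to new images, which is the functional divergence described above.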
We are also grateful for the question regarding the KKT conditions.
As elaborated in both our manuscript and [1], it's acknowledged that the KKT conditions primarily ensure the extraction of information from data points on the classification margin, which represents only a subset of the training dataset. Consequently, during training, our generator predominantly receives supervision signals from these data points on the margin. For the generation of points outside this margin, the generator depends on its learned generalization ability, which might be a reason for the varied quality of certain generated samples. Nonetheless, it's important to recognize that our model indeed operates as a generator. It transforms from the noise distribution to the real data distribution (or more precisely, to the distribution over a subset of training samples), and it also possesses various functionalities typical of generators, as highlighted above. | Summary: The work extends the dataset reconstruction (DR) method [1] of reconstructing the dataset from a classifier to the generative scenario. It provides several extensions to it:
- Instead of reconstructing particular data points, it aims to build a generator for the original dataset.
- Extends to the multi-class setting (from a binary classifier).
- Moves from homogeneous to $\Lambda$-quasi-homogenous assumptions for the NN classifier.
The method is tested on MNIST and CelebA and is able to generate samples which resemble the real ones.
[1] Haim et al. "Reconstructing Training Data from Trained Neural Networks"
Strengths: - The overall idea of obtaining a generator from a classifier is very interesting and potentially very influential.
- The work is genuine and open about its strengths and limitations. It feels like one of the rare works which is "what you see is what you get".
- The method is quite novel and shows some promise to be working. I guess, the current ideas might be extendable to GAN training (or at least Auxiliary Classifier GAN training).
- The source code was provided, which should improve the reproducibility.
- Table 1 with the notations in the appendix was quite helpful.
Weaknesses: - The paper is not self-sufficient and not possible to understand without reading the DR paper [1] first (at least, I failed to). It lacks intuitive exposition for the general audience who do not have vast background on the related work. Also, there are some confusing formulations on their own — e.g., Eq 5 and 6 define the loss term over $\theta$ and $\eta$, but they do not explicitly appear in the equation itself (only implicitly through $x_i$). And at the same time, $x_i$ is not uniquely defined — it denotes both real and fake data. Also, sometimes it appears in bold (L123), and sometimes in normal font (e.g., Eq 1b). Also, Equation 12 is more or less a tautological repetition of Equation 7.
- The experimental results do not seem to be strong. E.g., checking the images in Figure 1b, they are hardly recognizable. While for [1], some restored images look quite reasonable, even considering it's CIFAR10 (a more difficult dataset than CelebA).
- There should be a comparison with standard approaches of generating samples from a classifier (e.g., DeepDream or DeepDream from an adversarially robust classifier [3]). After checking some random DeepDream implementation on github [2] (probably untuned), it's hard to tell which samples are really better.
- Limitations are not discussed
Typos:
- L181: "objective, We" => "objective, we"
- L234: "during the training" => "during training"
[2] https://github.com/ianscottknight/deep-dream-implementation-on-mnist-using-pytorch-hooks/blob/main/1.0-deep-dream.ipynb
[3] https://arxiv.org/abs/1906.09453
Technical Quality: 3 good
Clarity: 1 poor
Questions for Authors: My two biggest concerns are the writing quality and the scalability of the method, but I'm unsure how they could be addressed during the short rebuttal period. Another interesting direction would be exploring adversarially robust classifiers [3] (I guess, adversarially robust training should prevent the landscape from covering spurious minima and hence help the generation quality).
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 1 poor
Contribution: 3 good
Limitations: Limitations are not discussed at all.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ### **Scalability**
We thank the reviewer for pointing out the concern about the scalability of our method. To address the concern, we scale up the classifier and conduct experiments. We consider two types of architectures: (1) a 10-layer fully-connected network and (2) a network with 3 convolutional layers and 3 fully-connected layers. The quantitative scores corresponding to different classifiers are shown in Tab. 2 in the attached PDF file. After scaling up the classifier, the ISC increases. This indicates that scaling up the classifier helps to preserve more information about the training data and thus improves the performance of the generators.
### **Writing**
We appreciate the reviewer's comment regarding the writing. We will enhance the clarity in the introduction and method sections by incorporating more intuitive explanations of our approach. The main reason for our current writing style is our desire to gradually derive our method from a theoretical standpoint, which necessitated the introduction of numerous mathematical symbols and concepts. In the revised version, we will provide more intuitive explanations for the mathematical symbols and analyses employed.
### **Exploration of the Adversarially Robust Classifiers**
We are grateful for the reviewer's suggestion to explore the direction of adversarially robust classifiers. We have replaced the standard classifiers used in our method with adversarially robust classifiers and conducted experiments accordingly. Tab. 3 compares our method with an Empirical Risk Minimization (ERM) classifier, our method with adversarially robust classifiers, and the method mentioned in [3]. It is evident that our method, when combined with adversarially robust classifiers, indeed enhances the quality of the generated images. However, [3] outperforms our method with an ERM classifier. The reasons for this result are two-fold. First, [3] addresses a different setting from ours. It optimizes each image individually, thus having a significantly larger optimization space than our method. We aim to learn the parameters of a generator and use it to generate all images. Second, [3] utilizes strong prior information, such as the mean and variance of the samples in each class. However, such priors are not used in our method.
### **Self-Sufficiency and Prerequisite Knowledge**
We extend our sincere gratitude to the reviewer for advising us to enhance the self-sufficiency of our paper and to incorporate prerequisite knowledge. We will add a section in the supplementary material providing an intuitive introduction to the background and preliminary knowledge, thereby facilitating a more accessible understanding of our work for our readers.
### **Further Explanation of Some Formulations**
We appreciate the reviewer's attention to detail in our manuscript.
1. In Eq. 5 and 6, the generated samples $x_i=g(\epsilon, y_i;\theta)$ and the Lagrange multipliers $\mu_{ic}=\mathrm{ReLU}(h(x_i,y_i;\eta)[c])$ are produced by neural networks $g$ and $h$, respectively. $\theta$ and $\eta$ represent the parameters of these networks. To avoid overcomplicating the equations, we omitted the specific expressions for $x_i$ and $\mu_{ic}$ in Eq. 5 and 6. However, we will provide their complete expressions in the revised version.
2. To distinguish between real data samples and generated samples, we will employ different symbols.
3. The symbol $x_i$ should always be in bold, and we will ensure this is consistently applied in the revised version.
4. Eq. 7 and 12 represent the loss function used for training and the optimization problem used for training, respectively. In the revised version, we will integrate Eq. 7 and 12 into a single equation.
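As a concrete illustration of the parameterization described in point 1, the following sketch builds a toy generator $g$ and multiplier network $h$; all layer sizes, the one-hot label encoding, and the use of plain NumPy are our own illustrative assumptions, not the actual architecture used in the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(v):
    return np.maximum(v, 0.0)

# Toy generator g: maps (noise, one-hot label) -> image vector.
# theta = {W1, W2} plays the role of the learnable generator parameters.
noise_dim, num_classes, img_dim, hidden = 8, 10, 16, 32
theta = {
    "W1": rng.normal(size=(noise_dim + num_classes, hidden)) * 0.1,
    "W2": rng.normal(size=(hidden, img_dim)) * 0.1,
}
# Toy network h producing the per-class Lagrange multipliers.
eta = {"V": rng.normal(size=(img_dim + num_classes, num_classes)) * 0.1}

def g(eps, y_onehot, theta):
    h1 = relu(np.concatenate([eps, y_onehot]) @ theta["W1"])
    return h1 @ theta["W2"]

def multipliers(x, y_onehot, eta):
    # mu_{ic} = ReLU(h(x_i, y_i; eta)[c]): non-negative by construction.
    return relu(np.concatenate([x, y_onehot]) @ eta["V"])

eps = rng.normal(size=noise_dim)
y_onehot = np.eye(num_classes)[3]
x = g(eps, y_onehot, theta)         # generated sample x_i
mu = multipliers(x, y_onehot, eta)  # multipliers mu_{ic}, all >= 0
```

The point of the sketch is only that the learning targets are the weights of $g$ (and $h$), not the pixels of any particular image.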
### **Performance and Comparison with [1] Haim et al. 2022**
We greatly appreciate the reviewer's comparison between our work and [1] Haim et al. We kindly direct the reviewer to the global response, where we systematically compare our approach with that of [1] Haim et al.
Admittedly, given the highly ambitious nature of the proposed task and this being the first attempt along this line, the limited performance of our approach is expected: our training does not rely on any data, and training a generator is inherently challenging.
### **Comparison with DeepDream**
We thank the reviewer for comparing our work with DeepDream. We kindly direct the reviewer to the global response, where we systematically compare our approach with that of DeepDream.
### **Limitations**
We appreciate the reviewer's suggestion about the limitation section, which we will incorporate into the revised version. Our method has two primary limitations.
(1) Estimating Lambda introduces additional computational overhead. Given that our approach requires the classifier to be a quasi-homogeneous model, we need to determine the classifier's Lambda before training the generator. The method we provide in our paper for calculating Lambda can be computationally intensive, especially when the classifier has a large number of parameters.
(2) Our method exhibits class bias, with significant variations in the generation quality for different categories. We hypothesize that there might be two reasons leading to this phenomenon. First, different categories may have varying complexities in their data distributions, making it more challenging for the generator to produce certain categories. Second, the generator might inherit the classifier's bias. If the classifier does not fit well for certain categories, the information about that category in the classifier might be limited or inaccurate. This, in turn, would affect the generator's performance on that particular category.
### **Typos**
We are grateful to the reviewer for pointing out the typos in our manuscript. We will rectify all such errors in the revised version.
---
Rebuttal Comment 1.1:
Title: Response to the rebuttal
Comment: I am thankful to the reviewers for providing the rebuttal, clarifications and additional results. I decided to increase my rating to "Weak accept", because I find this submission to be quite novel and non-mainstream, and believe that NeurIPS should be welcoming to works with unusual ideas even if they are not well tuned.
My two remaining concerns are:
- The quality is not good enough and for some people might look like negative results. You can consider applying the technique on top of existing image generators pre-trained for a different objective to show nicer images (note: please do not treat it as a request for additional experiments, I'm just thinking out loud).
- In its current form, the paper will be a tough read for the image generation community. Since the results are not striking, people will not invest effort into trying to understand such a difficult work. I increased the rating hoping that this will be improved in the revised version.
---
Reply to Comment 1.1.1:
Title: Response to Reviewer gUqt
Comment: We truly appreciate the reviewer for the recognition of our paper and for the valuable feedback. We will incorporate the suggestions and comments the reviewer provided in the “official review” and “response to the rebuttal” to enhance our manuscript, improve its readability, offer a more intuitive background introduction, elucidate our methodology further, and enrich our experimental results. | Rebuttal 1:
Rebuttal: Esteemed Senior AC, AC, and Reviewers,
We deeply appreciate the reviewers' and ACs' dedication to reviewing and managing our submission. It is with great pleasure that the reviewers highlighted several strengths of our work, such as the importance and the potential impact of the idea/task/problem [gUqt, 5Ctm, and LZcP], the novelty of our method [gUqt and LZcP], the soundness of our method [5Ctm], the promise of our method to be working [gUqt] and the promising results [5Ctm].
**We have additionally uploaded a PDF file containing all supplementary tables and images for perusal.**
Thanks once again for the valuable comments and consideration.
Respectfully,
Authors of 'Generator Born from Classifier'
---
### **Comparison with Other Methods**
Using the global response, we systematically draw comparisons between our approach and the methods mentioned by the reviewers.
We also provide Tab. 1 in the submitted PDF file as a summary of the comparison, which provides an overview of the differences and similarities among the methods and also includes the quantitative evaluation on the MNIST.
#### **Comparison with DeepDream**
We thank the reviewer gUqt for mentioning DeepDream. DeepDream aims to visualize patterns learned by neural networks, while our method aims to train a generator for the conditional sampling of images. Technically, to generate each image, DeepDream requires a gradient optimization process to maximize a certain activation or network output. By contrast, once our generator is trained, images can be produced by sampling random noise. Thus, (1) DeepDream has larger computational complexity and takes longer to generate each image compared to our method; (2) DeepDream has a larger parameter space.
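The computational contrast can be sketched as follows; the linear stand-in "classifier", the dimensions, and the step sizes are all our own toy assumptions, chosen only to show that DeepDream-style generation needs an optimization loop per image while a trained generator needs a single forward pass:

```python
import numpy as np

rng = np.random.default_rng(1)
img_dim, num_classes, noise_dim = 16, 10, 8
W = rng.normal(size=(img_dim, num_classes)) * 0.1  # stand-in linear "classifier"

def deepdream_image(label, steps=100, lr=0.1):
    # Per-image gradient ascent on the class logit: a full optimization
    # loop (forward pass + gradient step) for every image produced.
    x = rng.normal(size=img_dim) * 0.01
    for _ in range(steps):
        x += lr * W[:, label]  # d(logit_label)/dx for the linear stand-in
    return x

def generator_image(label, G):
    # A trained generator needs only a single forward pass per image.
    z = np.concatenate([rng.normal(size=noise_dim), np.eye(num_classes)[label]])
    return z @ G

G = rng.normal(size=(noise_dim + num_classes, img_dim)) * 0.1
x_dream = deepdream_image(3)   # costs `steps` gradient steps
x_gen = generator_image(3, G)  # costs one matrix multiply
```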
#### **Comparison with [1] Haim et al. 2022 and [2] Buzaglo et al. 2023**
We thank the reviewer gUqt for comparing our work with [1], and reviewer KxBC for comparing our work with [1] and [2].
By the NeurIPS 2023 policy, our work and [2] are concurrent works.
Herein, we group [1] and [2] together for a comparative analysis with our method.
1. **Task.** Our goal is to train a generator network, which is capable of transforming random noise into data of good perceptual quality.
By contrast, [1] and [2] aim to recover training data. Our task is more challenging in two respects. (1) we aim to generate samples that are not present in the original dataset but have good perceptual quality. (2) our generation process is controllable and supports conditional sampling.
2. **#Categories.** [1] only supports binary classifiers, while our work and [2] support multi-class classifiers.
3. **Supported Networks.** [1] and [2] only support homogeneous networks. By contrast, our method supports quasi-homogeneous networks, which encompass a broader range of networks. The experiments about non-homogeneous models in [1] only involve fully connected networks with bias terms. The networks in our experiments are more complex and diverse.
4. **Technical Aspect.** [1] and [2] optimize directly in the pixel space. Specifically, the number of optimizable parameters equals $\text{BatchSize} \times \text{ImageSize}$. In [1], the batch size is set to twice the size of the training set.
For MNIST with 500 images, the number of parameters is 1M. However, our optimizable parameters only include those of the generator. In our implementation, this is only 0.18M, which is significantly lower than in [1] and [2].
Therefore, we conclude that we are addressing a more challenging task than [1] and [2] do, and we are doing so with a more constrained budget in terms of #optimizable parameters.
#### **Comparisons with [3] Li, 2022 and [4] Wang and Torr, 2022**
We thank the reviewer KxBC for mentioning [3] and [4].
[3] is to generate an image $x$ given a pre-trained classifier $\Phi$ and a category $y$ according to
$$
x = \arg\min_{\hat{x}} L(\Phi(\hat{x}),y) + \beta||\hat{x} - \bar{x}||,
$$
where $L$ is the cross-entropy loss, and the regularization term is the norm between the generated sample $\hat{x}$ and the training-data mean $\bar{x}$.
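A minimal sketch of this per-image optimization, using a linear softmax classifier as a stand-in for $\Phi$ and a squared norm for the regularizer (both our own simplifying assumptions):

```python
import numpy as np

rng = np.random.default_rng(3)
dim, C = 16, 10
W = rng.normal(size=(dim, C)) * 0.3  # stand-in linear classifier Phi
x_bar = rng.normal(size=dim)         # training-data mean prior used by [3]
beta, lr, y = 0.1, 0.05, 4
onehot = np.eye(C)[y]

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def objective(x):
    # cross-entropy toward class y plus the pull toward the data mean
    return -np.log(softmax(W.T @ x)[y]) + beta * np.sum((x - x_bar) ** 2)

x = rng.normal(size=dim) * 0.01
obj_start = objective(x)
for _ in range(500):
    p = softmax(W.T @ x)
    # analytic gradient of the cross-entropy w.r.t. x, plus regularizer gradient
    grad = W @ (p - onehot) + 2 * beta * (x - x_bar)
    x -= lr * grad
obj_end = objective(x)
```

Note that every generated image requires its own run of this loop, which is the complexity difference we draw in the comparison below.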
[4] is to generate N images $x^1, \cdots, x^N$ that belong to category $y$ given a pre-trained classifier $\Phi$ according to
$$
x^1, \cdots, x^N = \arg\min_{\hat{x}^1, \cdots, \hat{x}^N} \sum_{i=1}^N L(\Phi(\hat{x}^i),y) + \beta_1 L_{div}(\hat{x}^1, \cdots, \hat{x}^N) + \beta_2 L_{dist}(\hat{x}^1, \cdots, \hat{x}^N),
$$
where $L$ is the cross-entropy loss, $L_{div}$ is a similarity penalty, and $L_{dist}$ constrains the generated samples and the training data to have the same mean and variance in the feature space.
1. **Technical Aspect.** To generate images, [3] and [4] require multiple gradient descent steps to directly optimize each pixel of every image. Each gradient optimization step entails one forward pass and one gradient propagation through the classifier. By contrast, once our generator is trained, images can be generated by only a single forward pass through the generator. This results in two differences: (1) for the generation of each image, [3] and [4] possess a higher computational complexity and take longer to process; (2) [3] and [4] have more optimizable parameters and more expansive parameter spaces.
2. **Prior Information.** [3] and [4] use prior information about the data for generation. Specifically, the mean used in [3] and the mean and variance used in [4]. However, our method solely relies on a pre-trained classifier and does not use the prior information of the data.
Therefore, we conclude that these methods address a different setting. We are dealing with a more challenging task, and our solution has lower complexity.
[1] Haim et al, 2022. "Reconstructing Training Data from Trained Neural Networks".
[2] Buzaglo et al, 2023. "Reconstructing Training Data from Multiclass Neural Networks".
[3] Li, 2022. "Use Classifier as Generator".
[4] Wang and Torr, 2022. "Traditional Classification Neural Networks are Good Generators: They are Competitive with DDPMs and GANs".
Pdf: /pdf/9f248120269c558628dd60e3f27cbe426163d495.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Design from Policies: Conservative Test-Time Adaptation for Offline Policy Optimization | Accept (poster) | Summary: This paper proposes an iterative bi-level offline RL algorithm that separates an "inner level" and "outer level" optimization for reinforcement learning. A novel variant of model-based optimization (including task decomposition and task embedding) is proposed to enable flexible test-time adaptation. Experiments on popular d4rl benchmarks show state-of-the-art performances over other recent baselines.
Strengths: This paper proposes an approach that incorporates model-based optimization into offline rl with test-time adaptation. Each component in the methodology is well justified. This reformulation of offline rl as model-based optimization seems novel and can be of value to the community.
Adequate comparisons have been made to prior iterative and non-iterative offline rl methods, and the differences between the proposed method and related works are well justified.
Systematic evaluations on popular D4RL benchmarks are included, and statistically significant improvement over other state-of-the-art methods is observed.
The weakness of increased computation cost is discussed.
Weaknesses: The writing of the paper (especially that of the abstract and introduction) is hard to follow. Specifically, the terms "inner level" and "outer level" are very confusing. Using other terms such as "value estimation" and "policy extraction" or the like will make reading much easier. The three questions Q1, Q2, and Q3 raised in the abstract and introduction are also very vague and do not make sense until I have carefully read the methodology and experiments. In my opinion, the main contribution of this paper is reformulating offline rl as a model-based optimization problem and it makes flexible adaptation to offline states possible.
I am missing a comparison to related works in offline rl with test time adaptation. In particular, a comparison with [x] should be discussed.
In Figure 5, is it possible to also include the inference time of other baselines (COMs, RvS-R, etc)? How do Onestep and DROP compare when they use the same amount of computation?
As discussed in the limitations section, the number of subtasks can be important for this method. However, in offline rl when online data is difficult to collect, we are not allowed to tune hyperparameters using online interactions [y]. The procedure of determining hyperparameters should be included in the paper.
[x] Offline RL Policies Should be Trained to be Adaptive (https://arxiv.org/abs/2207.02200)
[y] Conservative Objective Models for Effective Offline Model-Based Optimization (https://arxiv.org/pdf/2107.06882)
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: In Table 2, which variant is the listed DROP? DROP Best, Grad, Best-Ada, or Grad-Ada?
When comparing DROP with LAPO in Table 2, why are the results of LAPO on u, um-p, and ul-p missing? How is the mean calculated with missing entry?
In Figure 5, the inference time of Diffuser is reported, is it possible to also report its performance in Table 2 or Figure 4?
In line 226-228, "Different from this iterative paradigm, DROP only evaluates values of behavior policies in the inner-level optimization, avoiding the potential overestimation for values of new learning policies and eliminating the error propagation between the two levels." I do not quite understand why the iterative process can cause error propagation for potential overestimation of values, if you consider the iterative process as a min max optimization process? And there is an additional overestimation of MBO introduced by this paper.
I am willing to increase the score if the weaknesses and questions can be adequately addressed.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: The limitations of the number of subtasks and how to split subtasks are adequately discussed in the paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you very much for the insightful comments.
**(1) the terms "inner level" and "outer level", and the raised three questions.**
Thank you for your valuable suggestion. We will include it in our revision. Thank you very much!
**(2) Comparison to related works in offline RL with test time adaptation.**
We thank the reviewer for raising the related work. We refer the reviewer to the global response for the comparison.
**(3) In Figure 5, is it possible to also include the inference time of other baselines (COMs, RvS-R, etc)?**
We believe that it is not very necessary to include the inference time of COMs or RvS-R because they only perform inference at s_0 (the initial state of a test rollout). The subsequent test time steps do not require any additional inference about the context variables. Therefore, we did not include their inference time in Figure 5.
**(4) How do Onestep and DROP compare when they use the same amount of computation?**
Thank you for this suggestion. We implement this comparison by searching for a time interval that makes DROP's inference time consistent with Onestep, and then we compare Onestep and DROP+CVAE (with the searched inference interval, DROP-Grad-Ada implementation). We show the experimental results in the following table (D4RL *-v2). We can see that overall, with the same amount of computation, DROP still performs better than Onestep, except for the tasks walker2d-medium-expert and walker2d-medium-replay.
| | Onestep | DROP |
| ---- | ---- | ---- |
| umaze | 64.3 | **75.0**$\pm$2.3 |
| umaze-diverse | 60.7 | **66.2**$\pm$1.9 |
| antmaze-medium-play | 0.3 | **33.2**$\pm$3.6 |
| antmaze-medium-diverse | 0 | **38.1**$\pm$2.5 |
| antmaze-large-play | 0.3 | **21.5**$\pm$1.6 |
| antmaze-large-diverse | 0 | **28.9**$\pm$3.1 |
| walker2d-medium-replay | **66.4** | 63.5$\pm$1.5 |
| hopper-medium-replay | 77.3 | **78.8**$\pm$1.7 |
| halfcheetah-medium-replay | 38.4 | **42.0**$\pm$1.9 |
| walker2d-medium-expert | **111.8** | 102.7$\pm$1.7 |
| hopper-medium-expert | 81.4 | **100.2**$\pm$2.3 |
| halfcheetah-medium-expert | 77.0 | **89.1**$\pm$1.8 |
**(5) procedure of determining hyperparameters (the number of subtasks).**
In our experiments, we take the number of subtasks as a hyperparameter and choose it with typical hyperparameter tuning strategies. Thank you for pointing this out. We will clarify this in our revision.
We also point out that our DROP+CVAE treats each trajectory as a single subtask and can achieve better performance. Thus, in the real world, the algorithm designer does not need to worry much about the number of subtasks, just take each trajectory as a single subtask (and use the DROP+CVAE implementation).
**(6) In Table 2, which variant is the listed DROP?**
We use DROP-Grad-Ada in Table 2.
**(7) When comparing DROP with LAPO in Table 2, why are the results of LAPO on u, um-p, and ul-p missing?**
The main reason is that LAPO did not release its source code, and our private reproduction was not as good as the results in their paper (there may be some details we did not notice). Therefore, we reported their results directly and left entries blank where LAPO did not report results in the paper.
To provide a reference for the reviewer, we provide our reproduction results (antmaze *-v2) in the following table.
| | umaze | umaze-diverse | medium-play | medium-diverse | large-play | large-diverse |
| ---- | ---- | ---- | ---- | ---- | ---- | ---- |
| LAPO (in its original paper, *-v1) | – | 91.3 | – | **85.7** | – | 61.7 |
| LAPO (our implementation, *-v2) | 86.5$\pm$1.5 | 80.6$\pm$2.1 | 68.5$\pm$1.8 | 79.2$\pm$1.8 | 48.8$\pm$2.6 | **64.8**$\pm$3.0 |
| DROP (*-v2) | **90.5**$\pm$2.4 | **92.2**$\pm$1.7 | **74.1**$\pm$3.9 | 82.9$\pm$3.5 | **57.2**$\pm$5.5 | 63.3$\pm$2.4 |
**(8) results of Diffuser.**
Thank you for pointing this out. We will include the Diffuser results in Table 2 in our revision.
**(9) why the iterative process can cause error propagation for potential overestimation of values**
The main reason is that naively performing policy evaluation (Equation 1 in the main paper) may query the estimated $Q^k(s', a')$ for actions that are far outside the static offline data, resulting in a pathological value $Q^{k+1}(s, a)$ with large error. Such an iterative process (iterative policy evaluation and improvement) will further cause the inferred policy $\pi_{k+1}(a|s)$ to be biased towards OOD actions with erroneously overestimated values. Thus, performing policy evaluation and improvement iteratively leads to the (potential) policy/value errors being propagated iteratively, which in turn leads to collapsed performance.
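A toy numerical illustration of this failure mode (the setup and numbers are purely our own contrivance): maximizing over all actions latches onto overestimated out-of-data values, while restricting evaluation to behavior (in-data) actions stays within data support.

```python
import numpy as np

rng = np.random.default_rng(2)
num_actions = 10
in_data = np.zeros(num_actions, dtype=bool)
in_data[:3] = True  # only 3 of the actions appear in the offline dataset

true_q = rng.normal(size=num_actions)
# Estimated Q: accurate on in-data actions, inflated on OOD ones
# (the classic overestimation pattern for unconstrained function approximation).
q_hat = true_q.copy()
q_hat[~in_data] += np.abs(rng.normal(scale=5.0, size=(~in_data).sum()))

# Iterative policy improvement maximizes over ALL actions, so the backup
# target can latch onto an erroneously overestimated OOD action...
target_iterative = q_hat.max()
# ...while evaluating only behavior (in-data) actions stays within data support.
target_behavior = q_hat[in_data].max()

# The superset max is never smaller, and with inflated OOD estimates it is
# typically the inflated value that gets propagated through subsequent backups.
assert target_iterative >= target_behavior
```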
---
Rebuttal Comment 1.1:
Comment: Thanks for answering the questions and providing the new results. With the new results, I am now convinced by the advantage of DROP over OneStep with the same computation budget. I am also convinced by the novelty of the proposed algorithm and its possibility of introducing MBO as a general framework in offline RL. However, from the general response, it seems that the improvement of DROP over previous methods such as APE-V [1] and CCVL [2] is not very clear, as acknowledged. It also seems that the method is a bit complicated with many hyperparameters that need to be tuned, such as the number of sub-tasks and the dimension of the information bottleneck, so I'm not sure how we can tune those hyperparameters in real-life situations in the absence of a simulator. Therefore, I have decided to keep my original score of 5.
[1] Ghosh, D., Ajay, A., Agrawal, P., & Levine, S. (2022, June). Offline rl policies should be trained to be adaptive. ICML 2022
[2] Hong, J., Kumar, A., & Levine, S. (2022). Confidence-Conditioned Value Functions for Offline Reinforcement Learning. ICLR 2023
---
Reply to Comment 1.1.1:
Title: Thank you for the response.
Comment:
We thank the reviewer for the response. We are glad to see that the reviewer acknowledges the novelty of the proposed algorithm and its possibility of introducing MBO as a general framework in offline RL. Thank you!
**1. comparison to APE-V [1] and CCVL [2].**
Regarding the relationship of DROP to APE-V [1] and CCVL [2], our approach in algorithm design is quite different from theirs: DROP is a non-iterative offline algorithm (which naturally avoids error propagation and exploitation), however, APE-V and CCVL use an iterative paradigm. Despite the fact that we both have the advantages of test-time adaptation, a clear benefit of our approach is that the OOD issues for offline RL are naturally solved in the non-iterative paradigm, while APE-V and CCVL require to be built on top of other offline RL methods to eliminate OOD problems, i.e., APE-V is based on Q-ensemble and CCVL utilizes the anti-exploration bonus.
Therefore, in comparison to APE-V and CCVL, our algorithm is simpler and easier to implement.
**2. application in real-life tasks.**
We provide further clarification regarding the reviewer's concern about the implementation in real-life tasks.
We note that the reviewer's main concern is the selection of the number of sub-tasks and the dimension of the information bottleneck (dim(z)), and we acknowledge that our naive DROP implementation does share this concern. However, we must point out that `our DROP+CVAE implementation is entirely free of such concerns` and at the same time works better than naive DROP.
+ The number of sub-tasks: DROP+CVAE treats each trajectory in the offline data as a separate task, and thus one does not need to manually tune the number of sub-tasks.
+ The dimension of information bottleneck: we experimentally found that DROP+CVAE is in fact robust to dim(z), and a simple choice of 5 can lead to good performance on a large number of tasks. Therefore, the size of z is not worth being a concern for the practical application of DROP+CVAE.
Therefore, in practice, one can just use the DROP+CVAE implementation, which completely eliminates the hyperparameter concerns mentioned by the reviewer and at the same time works better.
We would like to thank the reviewer for the thoughtful comments. Please let us know if there are any concerns preventing you from raising your score.
[1] Ghosh, D., Ajay, A., Agrawal, P., & Levine, S. (2022, June). Offline rl policies should be trained to be adaptive. ICML 2022
[2] Hong, J., Kumar, A., & Levine, S. (2022). Confidence-Conditioned Value Functions for Offline Reinforcement Learning. ICLR 2023 | Summary: The paper proposes a non-iterative offline RL algorithm, in which the policy is trained in a bi-level optimisation process. As opposed to commonly known iterative algorithms, the authors propose to split the optimisation into an inner loop training to mitigate any OOD issues, and an outer loop optimisation for reward maximisation. The paper compares the proposed algorithm on a subset of the D4RL benchmark datasets with prior iterative offline RL algorithms and demonstrates new state-of-the-art performance.
Strengths: I think the proposed way of thinking about offline RL is still relatively new and a logical, very promising way forward. It seems wasteful to not make use of the information presented to the policy online (which is what many offline algorithms do since after training the policy stays fixed no matter what happens) - moving a part of the optimisation into this outer loop thus makes a lot of sense to me. A particularly new element to me is that the value that the final policy conditions on is not something handcrafted, but a learned representation of the potential behaviour policies. The combination with MBO is also a very interesting concept that I am unaware of being used in prior offline RL methods. I thus consider the method to score well in terms of novelty. I also believe that in the future we will see a lot more algorithms operating in a similar fashion, at least in the regard that part of the optimisation is moved to the online / outer loop.
The paper provides meaningful experiments and statistical analysis thereof: The algorithm is evaluated on maze as well as robotic locomotion tasks from the D4RL suite and the authors report measures beyond mean performance, such as probability of improvement and IQM performance. They also perform an analysis of the computational burden.
Weaknesses: While I very much like the idea, I personally found the framing with the three questions and answers rather confusing. E.g., the questions are framed differently in the abstract and the introduction and leave room for lots of interpretation. For example, the question "What should we pay attention to when exploiting the transferred information in the outer-level optimization?" lacks a goal (i.e., what should we pay attention to ... in order to achieve WHAT? Otherwise the answer could be anything).
Also, the term "non-iterative, bi-level optimisation" is used very often. It is obvious what is meant by bi-level (inner and outer are described), but it is at least a little ambiguous what is meant by "non-iterative". I'm sure DROP also has iterations in some sense (i.e. mini batches during the learning of the conditional behaviour policy) - I think what you mean is that there is no iteratively learned value function which bootstraps off of itself and thus propagates errors further and further. Is that right? Please clarify.
The answer to question 3 is that the outer loop allows using the online data to adapt the policy. This is intuitive, but I would say an experiment to really show this is lacking. DROP is compared with some iterative baselines and shows better performance, but this could be due to other differences as well. What I would have liked to see would be something like:
- do everything like DROP, but try to put the outer-loop directly into the optimisation
- e.g.: perform optimisation in z-space for all states in the dataset and distill the resulting global policy model + conditioning values into a fixed model
- then you could isolate the difference that the outer-loop actually makes
The authors mention and qualitatively compare prior "non-iterative" algorithms (RvS, Onestep, COMs, F-BC), yet do not include them in their empirical evaluation. This seems a little odd, since one could assume they have similar qualities in some respects, and it would be interesting to see what benefit the newly proposed non-iterative method has over prior methods of this type.
Further, it seems to me that some prior works are missing in this context, i.e., there already exist offline RL algorithms that condition on something which can then adaptively change the policy behaviour by altering the conditioning value: in [1], the policy conditions on a level of confidence; in [2], on a balance between conservatism and more liberal behaviour; and in [3], it conditions on a belief over the MDP dynamics, which is updated based on each online state-action pair that is processed. I realise these are all rather recent, but I think they could at least be part of the qualitative comparison.
[1] Hong, J., Kumar, A., & Levine, S. (2022). Confidence-Conditioned Value Functions for Offline Reinforcement Learning. ICLR 2023
[2] Swazinna, P., Udluft, S., & Runkler, T. (2022). User-Interactive Offline Reinforcement Learning. ICLR 2023
[3] Ghosh, D., Ajay, A., Agrawal, P., & Levine, S. (2022). Offline RL Policies Should Be Trained to Be Adaptive. ICML 2022
Most weaknesses should be rather easy to address. What I am most concerned about is more relevant baselines (prior non-iterative algorithms, the missing ones that I mentioned, or the experiment proposed to actually show the merit of the outer-loop).
Technical Quality: 4 excellent
Clarity: 2 fair
Questions for Authors: How is the subset of D4RL datasets chosen? Why did you not use all of the available locomotion ones, e.g. medium-expert and medium-replay?
Why are there standard deviations only for DROP, but not for the other algorithms? How is the probability of improvement then calculated? Is it whether the other method's mean is outside DROP's 95% CI, or whether the two 95% CIs don't overlap?
What do you mean exactly with non-iterative?
Why are the non-iterative prior works not part of the empirical evaluation?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 2 fair
Contribution: 4 excellent
Limitations: Limitations are discussed & I enjoyed reading them since I also thought about these points while reading the paper.
potential negative societal impacts are not discussed
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal:
Thank you very much for the insightful comments!
**(1) description of the three questions and answers.**
Thank you for the suggestion, we will clarify this in our revision of the paper.
**(2) term: "non-iterative, bi-level optimization". I think what you mean is that there is no iteratively learned value function that bootstraps off of itself and thus propagates errors further and further.**
Yes, you are right. We will clarify this in the introduction.
**(3) comparison to a "distilled" DROP implementation, i.e., moving the outer-loop into the optimization and distilling the learned policy + conditions into a single fixed model.**
Thank you for such a valuable suggestion! As you suggested, we have performed new experimental comparisons under the DROP+CVAE implementation (performing inference with DROP-Grad-Ada). We can see that such a "distilled" DROP implementation achieves competitive performance, approaching DROP's overall score (though still slightly below DROP). It really is a good proposal!
| | DROP | "distilled" DROP |
| ---- | ---- | ---- |
| walker2d-random-v2 | **5.2** $\pm$1.6 | 4.5 $\pm$0.8 |
| hopper-random-v2 | **20.8** $\pm$0.3 | 18.9 $\pm$0.2 |
| halfcheetah-random-v2 | **32.0** $\pm$2.5 | 27.8 $\pm$1.54 |
| walker2d-medium-v2 | 82.1 $\pm$5.2 | **84.2** $\pm$2.4 |
| hopper-medium-v2 | **74.9** $\pm$2.8 | 67.1 $\pm$2.1 |
| halfcheetah-medium-v2 | **52.4** $\pm$2.2 | 50.9 $\pm$1.7 |
| sum | **267.4** | 255.4 |
**(4) The authors mention and qualitatively compare prior "non-iterative" algorithms (RvS, Onestep, COMs, F-BC), yet do not include them in their empirical evaluation.**
We refer the reviewer to Figure 4, where we provide the empirical comparison to these baselines.
**(5) comparison to recent works [1,2,3].**
We thank the reviewer for raising these related works. We refer the reviewer to the global response for the comparison.
[1] Confidence-Conditioned Value Functions for Offline Reinforcement Learning.
[2] User-Interactive Offline Reinforcement Learning.
[3] Offline RL policies should be trained to be adaptive.
**(6) Why did you not use all of the available locomotion ones?**
This is mainly due to page limitations, so we present only some of the comparative experiments in the main text, and include more results in the appendix.
**(7) standard deviations for the other algorithms?**
Thank you for pointing this out. The mean of most of the baselines is outside of DROP's 95% CI. We will include this in our revision. Thank you very much.
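To make the check concrete, here is a minimal sketch (our illustration, not code from the paper) of testing whether a baseline's reported mean falls outside a method's 95% confidence interval; the per-seed scores and the helper names are made up for illustration.

```python
import numpy as np

def ci95(scores):
    """95% confidence interval of the mean (normal approximation)."""
    scores = np.asarray(scores, dtype=float)
    mean = scores.mean()
    half = 1.96 * scores.std(ddof=1) / np.sqrt(len(scores))
    return mean - half, mean + half

def mean_outside_ci(drop_seed_scores, baseline_mean):
    """True if the baseline's mean lies outside DROP's 95% CI."""
    lo, hi = ci95(drop_seed_scores)
    return bool(baseline_mean < lo or baseline_mean > hi)

# hypothetical per-seed normalized returns for DROP on one task
drop_scores = [72.1, 74.9, 77.7, 75.9]
print(mean_outside_ci(drop_scores, 51.6))   # a much lower baseline mean -> True
print(mean_outside_ci(drop_scores, 75.0))   # a mean inside the interval -> False
```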
**(8) What do you mean exactly with non-iterative?**
We know that offline RL is prone to exploiting OOD state-actions and producing over-estimated values, which makes vanilla *iterative* policy/value optimization challenging. To mitigate this problem, a number of methods propose to either 1) introduce a policy/value regularization into the iterative loop or 2) eliminate the iterative loop itself. The term "non-iterative" refers to the paradigm of eliminating the iterative loop.
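As a toy illustration of this distinction (our sketch, not the paper's code): an iterative backup bootstraps off the learner's own max over actions, so a spuriously optimistic OOD value propagates into the target, while a one-step (non-iterative) evaluation of the behavior policy never queries that value.

```python
import numpy as np

gamma = 0.9
Q = np.zeros((2, 2))       # toy Q-table: 2 states x 2 actions
Q[1, 1] = 100.0            # spurious over-estimate on an OOD action

r, s_next = 0.0, 1
behavior_action = 0        # the only action the dataset contains in s_next

# iterative backup: bootstraps off max_a Q(s', a) and inherits the error
iterative_target = r + gamma * Q[s_next].max()            # 90.0

# one-step backup: evaluates only the behavior policy; the error never propagates
onestep_target = r + gamma * Q[s_next, behavior_action]   # 0.0
print(iterative_target, onestep_target)
```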
**(9) Why are the non-iterative prior works not part of the empirical evaluation?**
The main starting point is that we have already compared DROP and non-iterative methods in Figure 4, so we did not list the non-iterative results in Table 2. Thank you for raising this point. We will include it in our revision.
---
Rebuttal Comment 1.1:
Title: Rebuttal Response
Comment: I thank the authors for their detailed response, which has helped to increase my understanding of the manuscript.
A few questions remain:
On Q3: If you say that the performance of the distilled DROP is almost as good as the full version, doesn't that mean the "outer loop during testing" isn't as important to performance as initially proposed in your paper? I see the distilled version is slightly worse, so a little advantage appears to persist, but not as much as one would have thought. How do you interpret these new results?
On Q7: I realise you cannot post revisions now, but could you provide the standard deviations in a table here?
---
Reply to Comment 1.1.1:
Comment: Thank you for the response!
**1. Doesn't that mean the "outer loop during testing" isn't as important to performance as initially proposed in your paper?**
It should not be seen that way. Actually, this "distilled" DROP is still an implementation of DROP: even if we move the policy improvement phase to the training phase, it remains a non-iterative offline RL method, and the outer-loop still exists. The main difference between the vanilla DROP and the "distilled" DROP is the following: vanilla DROP conducts policy improvement only on the samples encountered during the test (deployment) phase and then outputs the corresponding actions, whereas the "distilled" DROP conducts policy improvement on all samples in the training data and then distills the improved policies into a fixed policy. Therefore, there is still an outer-loop in the "distilled" DROP method; it is simply moved to the training phase, which does not mean that the outer-loop is unimportant. The "distilled" DROP implementation is still the non-iterative offline RL method we advocate.
**2. The standard deviations.**
In the table below, we provide a performance comparison of the models on the Gym task. We also add new experimental results on the (Walker2d/Hopper/Halfcheetah) expert domains. (For a fair comparison, we take the baseline results from the LAPO paper.)
| |PLAS|LAPO|CQL|IQL|DROP|
|---|---|---|---|---|---|
| Walker2d-random-v2 | **9.2** $\pm$ 0.3 | 1.3 $\pm$ 2.1 | -0.2 $\pm$ 0.1 | 5.4 $\pm$ 0.4 | 5.2 $\pm$ 1.6 |
| Hopper-random-v2 | 6.7 $\pm$ 0 | **23.5** $\pm$ 0.6 | 8.3 $\pm$ 1.3 | 7.9 $\pm$ 1.7 | 20.8 $\pm$ 0.3 |
| Halfcheetah-random-v2 | 26.5 $\pm$ 0 | 30.6 $\pm$ 0.2 | 22.2 $\pm$ 1.4 | 13.1 $\pm$ 0.8 | **32** $\pm$ 2.5 |
| Walker2d-medium-v2 | 75.5 $\pm$ 10.6 | 80.8 $\pm$ 0.8 | **82.1** $\pm$ 6.3 | 77.9 $\pm$ 2.5 | **82.1** $\pm$ 5.2 |
| Hopper-medium-v2 | 51 $\pm$ 4.3 | 51.6 $\pm$ 3.3 | 71.6 $\pm$ 10.3 | 65.8 $\pm$ 5 | **74.9** $\pm$ 2.8 |
| Halfcheetah-medium-v2 | 44.5 $\pm$ 0.4 | 46 $\pm$ 0.3 | 49.8 $\pm$ 0.5 | 47.8 $\pm$ 0.8 | **52.4** $\pm$ 2.2 |
| Walker2d-expert-v2 | 109.6 $\pm$ 0.6 | **112.3** $\pm$ 0.1 | 108.8 $\pm$ 0 | 110 $\pm$ 0.3 | 110.8 $\pm$ 2 |
| Hopper-expert-v2 | 107 $\pm$ 11.6 | 106.8 $\pm$ 3.7 | 102.3 $\pm$ 7.5 | 109.4 $\pm$ 2.3 | **113** $\pm$ 3.9 |
| Halfcheetah-expert-v2 | 93.8 $\pm$ 7.4 | 95.9 $\pm$ 0.2 | 87.4 $\pm$ 20.3 | 95 $\pm$ 0.8 | **99.7** $\pm$ 2.6 |
| sum | 523.8 | 548.8 | 532.3 | 532.3 | **590.9** |
---
We would like to thank the reviewer for the thoughtful comments. Please let us know if there are any concerns preventing you from raising your score. | Summary: The paper tackles offline reinforcement learning and considers two classes of algorithms: iterative (at each step, policy evaluation and policy improvement are done sequentially) and non-iterative. The main contribution of the paper is to move part of the policy improvement from offline training to test time. This can increase the computational cost of deploying the model, but seems to improve performance. The paper also makes clever use of task decomposition and learns a low-dimensional task embedding to improve its results.
Strengths: 1. The idea is novel and very interesting.
2. The experimentation in the paper is thorough and organized, even though some ablations could potentially be added.
3. The paper is also well written, with high clarity and readability.
Weaknesses: 1. The final algorithm has many moving parts and components, and can feel over-engineered.
2. Ablation studies are not sufficient. One could potentially add ablations for the effects of the hyper-parameters; for example, it would be very important to see how the number of subtasks affects the performance of the algorithm.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: 1. How is the task decomposition done for appendix section B.1? I.e., which of the three decomposition rules from B.2 is used? In general, more information about the task decomposition in the main paper would be appreciated.
2. How is the sequential sampling done for Rank(N, M) task decomposition? My guess is first M trajectories (with highest returns) become task 1, the next M trajectories become task 2 and so on.
3. Appendix line 048, “rank decomposition rule leverages more high quality trajectories” → what defines the quality of a trajectory? If quality = return of the trajectory, then wouldn’t some subtasks have lower quality in rank decomposition due to having low return trajectories only?
4. Could we have an ablation to see the effect of N on the performance of the algorithm? For example, the onestep algorithm [1] does not have a sub-task-level decomposition. Maybe checking whether there is a certain threshold for N, below which the onestep algorithm performs better, would be interesting. In other words, I am curious about how much improvement is coming from task division, if that is one of the main driving forces behind the given algorithm's effectiveness.
5. Also is the computation time affected by N?
6. What is the advantage of using task embeddings $z$ instead of directly fitting an MBO on the collected data?
7. In table 2 (main paper), the umaze environment is antmaze-umaze or maze2d-umaze? Also the version of the environments should be included in the main paper.
[1] David Brandfonbrener, Will Whitney, Rajesh Ranganath, and Joan Bruna. Offline RL without off-policy evaluation. Advances in Neural Information Processing Systems, 34:4933–4946, 2021.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you very much for the positive comments.
**Q1: which of the three decomposition rules from B.2 is used?**
We use Rank(N, M) in the main paper. See Line 035 in the appendix. We will clarify it in our main paper. Thank you.
**Q2: Rank(N, M) rule: My guess is first M trajectories (with the highest returns) become task 1, the next M trajectories become task 2, and so on.**
Yes, you are right!
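To make the confirmed rule concrete, here is a minimal sketch (our illustration; the function and variable names are ours, and the toy returns are made up): sort trajectories by return, then chunk them into N groups of M.

```python
import numpy as np

def rank_decompose(trajectories, returns, N, M):
    """Rank(N, M): sort trajectories by return (highest first) and assign
    the top M to subtask 0, the next M to subtask 1, ..., up to N subtasks."""
    order = np.argsort(returns)[::-1]       # indices, highest return first
    return [[trajectories[i] for i in order[k * M:(k + 1) * M]]
            for k in range(N)]

trajs = ["t0", "t1", "t2", "t3", "t4", "t5"]
rets  = [10.0, 50.0, 30.0, 80.0, 20.0, 60.0]
print(rank_decompose(trajs, rets, N=3, M=2))
# [['t3', 't5'], ['t1', 't2'], ['t4', 't0']]
```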
**Q3: Appendix line 048, “rank decomposition rule leverages more high-quality trajectories” → what defines the quality of a trajectory? If quality = return of the trajectory, then wouldn’t some subtasks have lower quality in rank decomposition due to having low return trajectories only?**
Yes, you are right. It is true that some subtasks may contain only low-return trajectories. The main motivation is that we expect to build *a diverse set of task distributions*. Then trajectories of different qualities will have different latent embeddings, which will benefit contextual policy learning and optimal embedding inference (test-time adaptation).
**Q4: an ablation to see the effect of N on the performance.**
Good suggestion. We report the ablation results in the following table.
We can see that when N is very small, DROP generally performs worse than Onestep, suggesting that when the number of subtasks is small, updating z in the outer-level optimization does not bring as much benefit as updating the policy directly (Onestep). A general trend can be observed: DROP becomes progressively better as N increases and outperforms Onestep for some values, suggesting that optimizing the low-dimensional z (outer-level inference) brings more benefits at these points. However, we also note that a larger N is not always better, and performance degradation can occur when N is too large. We speculate that this is because, when N is too large, the learning (of the corresponding behavioral policies and Q-values) underfits, which in the end leads to worse performance.
| | 1 | 2 | 5 | 8 | 10 | 20 | 50 | 100 | Onestep |
| ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- |
| hopper-medium-replay-v2 | 65.9| 67.6 | 82.0| 79.5 | **87.4** | 87.0 | 85.9 | 77.8 | 77.3 |
| halfcheetah-medium-replay-v2 | 33.8| 35.7| 34.8| 35.8| 34.1| 39.5 | **40.3** | 38.6 | 38.3 |
| walker2d-medium-replay-v2 | 53.8| 57.9| 60.6| 59.7 | 61.9 | 59.9| 60.6| 56.9 | **66.4** |
**Q5: Is the computation time affected by N?**
In fact, N has a negligible effect on the inference time.
**Q6: What is the advantage of using task embeddings instead of directly fitting an MBO on the collected data?**
The main advantage of explicitly using task embedding is that we can perform test-time adaptation by exploiting the sequential structure of RL tasks, rather than simply performing inference at the beginning of the test rollout (which is what we do when we fit a simple MBO to the collected data). Empirically, we also find that test-time adaptation can yield better results than fitting a simple MBO (i.e., baseline COMs).
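A minimal sketch of what such test-time adaptation over the embedding can look like (our illustration: the quadratic score below is a toy stand-in for the learned score model, and the function names are ours): hold the networks fixed and run gradient ascent on the score over the low-dimensional z only.

```python
import numpy as np

def adapt_embedding(grad_score, z_init, steps=200, lr=0.1):
    """Plain gradient ascent on a score f(s, z) over z only (networks fixed)."""
    z = np.array(z_init, dtype=float)
    for _ in range(steps):
        z += lr * grad_score(z)   # ascend the score
    return z

# toy score f(z) = -||z - 1||^2, peaked at z = 1; its gradient is -2(z - 1)
grad_score = lambda z: -2.0 * (z - 1.0)
z_star = adapt_embedding(grad_score, z_init=np.zeros(2))
print(z_star)   # converges to ~[1., 1.]
```

In DROP, an update of this kind can be re-run on each state encountered during deployment, which is what distinguishes test-time adaptation from fitting a single MBO once on the collected data.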
**Q7: the version of the environments in Table 2.**
All results are on the "v2" version of the datasets, except for the results of LAPO on antmaze, which are on the "v1" dataset. To unify the versions of the dataset used, we re-implement the LAPO algorithm (not released by the authors) and report the results. We refer the reviewer to question (7) in the authors' response to reviewer SsTF for our reproduced results. We will include the new results in our revision of the paper.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for carefully considering my concerns and trying to address them!
**Further questions**:
(1) **Related to Q4**: an ablation to see the effect of N on the performance.
Would it be possible to repeat the experiment for multiple seeds and report an error bar? (Maybe for the purposes of the rebuttal, the authors do not need to test for so many values of N.) It seems that for halfcheetah-medium-replay-v2, the values for N = 20, 50, 100 are very similar to Onestep, and without a proper error bar, bolding the N = 50 value to indicate a superior result does not seem okay.
(2) **Related to Q5**: Is the computation time affected by N?
What is the computation time during training/learning the task embeddings with respect to N? In general, between Onestep [1] and DROP on similar tasks, how would the training time scale?
(3) **Related to Q3**: Appendix line 048, “rank decomposition rule leverages more high-quality trajectories” → what defines the quality of a trajectory? If quality = return of the trajectory, then wouldn’t some subtasks have lower quality in rank decomposition due to having low return trajectories only?
Thanks to the authors for their answer. However, this still does not clarify why line 048 says “rank decomposition rule leverages more high-quality trajectories” if some subtasks have lower quality in rank decomposition. What do the authors mean when they say "... leverages more high-quality trajectories". A comparison of the average/standard deviation of subtask trajectory quality between the three rules would clarify this issue. I imagine that might be out-of-scope for this paper, in which case changing this statement appropriately would be important.
(4) **Appendix C: Best hyperparameters**
I thank the authors for reporting the hyper-param search grid in the appendix. However, in addition, would it be possible to report the best hyper-params for each environment? This would help reproduce the paper's results efficiently.
(5) **Appendix C: Cost of hyper-param tuning**
One of the strong points of Onestep [1] was its robustness to hyper-params. Could the authors provide a brief discussion of all the hyper-params that Onestep needs to tune, and all the hyper-params DROP needs to tune? It seems, from Appendix C, that on D4RL environments a default choice of hyper-params does not work for all environments. The hyper-param tuning strategy mentioned also seems pretty extensive, being a two-step strategy. A more thorough investigation of the robustness of DROP to hyper-parameter choice would be appreciated.
(6) **Appendix C: Step 2 of hyper-param tuning**
Is having access to a simulator for hyper-param tuning during offline training a valid assumption?
(7) **Additional question 1**: Have the authors considered using something other than a CVAE to model the offline data and its multiple modes? For example, [2] discusses transformers to model multiple modes in offline data. Any reason for choosing CVAEs in particular?
[1] Offline RL Without Off-Policy Evaluation, https://arxiv.org/abs/2106.08909
[2] Behavior Transformers: Cloning modes with one stone, https://proceedings.neurips.cc/paper_files/paper/2022/file/90d17e882adbdda42349db6f50123817-Paper-Conference.pdf
---
Reply to Comment 1.1.1:
Title: Thank you for the response
Comment:
We thank the reviewer for the response. Below we address your further questions one by one.
**(1) Would it be possible to repeat the experiment results for multiple seeds, and report an error bar?**
Thank you for the suggestion. We provide the results (with 4 seeds) in the following table.
| | 1 | 2 | 5 | 8 | 10 | 20 | 50 | 100 | Onestep |
| ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- |
| hopper-medium-replay-v2 |64.1 $\pm$2.4 |68.7 $\pm$1.4 |82.6 $\pm$2.1 |79.7 $\pm$1.2 |**86.6** $\pm$1.2 |85.1 $\pm$0.8 |84.9 $\pm$1.1 |78.3 $\pm$2.3| 77.3 |
| halfcheetah-medium-replay-v2 |35.7 $\pm$1.3 |34.8 $\pm$0.5 |35.4 $\pm$1.5 |36.0 $\pm$1.2 |36.5 $\pm$2.3 |39.2 $\pm$2.1 |**42.5** $\pm$2.5 |39.1 $\pm$1.3| 38.3 |
| walker2d-medium-replay-v2 |52.1 $\pm$2.1 |56.0 $\pm$0.8 |61.3 $\pm$1.4 |58.7 $\pm$2.1 |63.1 $\pm$1.8 |60.5 $\pm$2.9 |60.2 $\pm$2.7 |57.2 $\pm$2.0| **66.4** |
**(2) Is the computation time affected by N?**
At inference (test-time adaptation), we record the average computation time of DROP while varying the number of sub-tasks (N). We can observe that N has a negligible effect on the inference time (since we can perform the computations in parallel).
| average inference time (s) during testing | N=10 | N=50 | N=100 | N=200 | N=500 | N=1000 |
| ---- | ---- | ---- | ---- | ---- | ---- | ---- |
| time interval = 10| 0.017| 0.0171| 0.0171| 0.0171| 0.0173| 0.0174|
| time interval = 20| 0.0108| 0.0109| 0.0109| 0.011| 0.0111| 0.0112|
| time interval = 50| 0.0088| 0.0087| 0.0087| 0.0088| 0.0091| 0.0091|
| time interval = 100| 0.0073| 0.0075| 0.0075| 0.0076| 0.0078| 0.008|
**(3) “rank decomposition rule leverages more high-quality trajectories.”**
Thank you for pointing out this. Indeed, such a statement is not very rigorous, and we will revise it in our revision. Thank you!
**(4) Would it be possible to report the best hyper-parameters for each environment?**
Thank you. We refer the reviewer to Table 5 (in the appendix) for the best hyper-parameters for each environment. We will clarify it in our revision.
**(5) A more thorough investigation on the hyper-parameter choice would be appreciated.**
Thank you for the suggestion. Compared to other offline RL methods, it is true that our naive DROP implementation introduces an extra hyper-parameter, the number of subtasks, due to the task decomposition step. However, we point out that DROP+CVAE does not need to tune this hyper-parameter at all, because it treats each trajectory in the offline data as a separate task. Meanwhile, DROP+CVAE is also robust to dim(z) (the dimension of the information bottleneck), and a simple choice of 5 leads to good performance on a large number of tasks. Compared to other offline RL algorithms, training DROP+CVAE does not introduce any additional burden of hyperparameter selection.
Therefore, DROP+CVAE implementation can eliminate the additional worry regarding the hyper-parameters, and it works better compared to naive DROP. In real-life offline RL tasks, one can just use the DROP+CVAE implementation.
**(6) Is having access to a simulator for hyper-parameter tuning during offline training a valid assumption?**
Yes. Such a setting for hyper-parameter tuning is consistent with most offline RL papers.
**(7) Any reason for choosing CVAEs in particular?**
The main reason for such a choice is the ability of CVAE models to yield compact and low-dimensional embeddings (robust and widely adopted), making them suitable for deriving test-time adaptation in our non-iterative paradigm.
Paper [1] mentioned by the reviewer does focus on multi-modal data; however, it assumes such data is expert. Therefore, there is no explicit policy improvement, which is the fundamental difference between [1] and DROP. Of course, it is absolutely feasible to replace the MLP policy architecture with a transformer model. Thanks to the reviewer for suggesting this; we will discuss it further in our revision.
[1] Shafiullah, N. M., Cui, Z., Altanzaya, A. A., & Pinto, L. Behavior Transformers: Cloning k modes with one stone. NeurIPS 2022.
We would like to thank the reviewer for the thoughtful comments. Please let us know if there are any concerns preventing you from raising your score. | Summary: This paper proposes a method, namely DROP, to adapt the policy at inference time. The authors achieve this by dividing the optimization process into two phases. In the first phase, the authors train a contextual behavior policy, a score model, and a deterministic task embedding model. In the second phase, the authors utilize the score model to adapt the policy execution, i.e., to follow the optimal embedding. The authors believe their proposed method can better achieve "stitching" across the states in the offline data. They provide some experiments on D4RL datasets to show the benefits of their proposed method.
Strengths: (a) this paper is well-structured and easy to follow
(b) the figures are nice and helpful for the readers to understand the claims from the authors
(c) it is good to see the authors consider statistical uncertainty and significance in the paper. I believe this is quite important to RL fields
Weaknesses: (a) I have several concerns on the ideas presented in the main text
- the adopted method shares many similarities with CQL, i.e., the authors utilize a CQL-style optimization objective in Equation 8. The regularization term is hence not novel. The difference is that CQL penalizes actions, while DROP penalizes the embedding $z$.
- though the authors claim that they learn a *score model*, the learnt score function $f$ is actually an ***action-value function***, but conditioned on the task embedding. This can be observed in Equation 8, where the score model is updated via the Bellman error. This is highly similar to [1, 2]: [1] learns a value function conditioned on the confidence $\delta$, while [2] learns a value function conditioned on an evolving belief. These two papers are relevant to this paper, yet they are not cited. I believe they are important baselines to compare against. Specifically, [1, 2] can also achieve the claimed *test-time adaptation* by adjusting the confidence or the belief. Actually, this has already been achieved by [2]. To realize this in [1], perhaps another objective function is needed (or simply maximizing the confidence-conditioned value function).
- the authors cite RoMA [3] but do not involve it as a baseline. I can tell that RoMA is also strongly related to the topics covered in this paper, and RoMA seems to exhibit better performance than COMs
- the generated embeddings seem to serve as a *goal* for the learnt policy. I believe some goal-conditioned and return-conditioned algorithms ought to be included **numerically** as baselines. The authors compare their DROP against IQL, CQL, PLAS, and LAPO, while none of them are goal-conditioned or return-conditioned algorithms. The authors provide an IQM comparison against Decision Transformer and RvS-R, while more advanced and stronger methods are expected
- Meanwhile, the comparison in Table 1 is misleading. These criteria are manually proposed by the authors. The success of DROP in answering A1-A3 does not necessarily indicate that DROP is better than prior methods
- the proposed method, DROP, seems quite redundant. It requires first splitting the offline dataset into some subsets, then learning the embedding network and the contextual behavior policy upon them. Additionally, one needs to learn a score model, and finally utilize this score model to query the optimal embedding during inference. It is unclear whether there is a need to divide the dataset into subsets and learn embeddings on them. The authors provide a design-choice comparison concerning which decomposition rule is better, while the corresponding analysis on whether decomposition itself is needed is missing. The authors write that this "suggests that fitting a single behavior policy may not be optimal to model the multiple modes of the offline data distribution" (lines 125-126), but the authors themselves choose to learn one single contextual behavior policy instead of multiple policies conditioned on the task embedding. Since the behavior policy is unique, why do we still need to split the dataset? For example, why do the authors not directly use $s, a$ as inputs to train $\phi$? I believe this eliminates the need to divide the dataset. Also, the reviewer does not think that training loss is a good indicator to show that multiple policies can better characterize the offline datasets (as presented in Appendix B.1).
[1] Confidence-conditioned value functions for offline reinforcement learning
[2] Offline rl policies should be trained to be adaptive
[3] Roma: Robust model adaptation for offline model-based optimization
(b) no theoretical understanding is provided either in the main text or in the appendix. I do not want to blame the authors too much on this point. However, it is unclear whether the score function will overestimate even with the introduced regularization, especially during deployment (i.e., whether a poor embedding is produced even if we optimize the score function at inference time). It is also unclear how strongly we should penalize the embedding. One of the advantages of CQL is that it theoretically answers that one ought to use a large $\alpha$ to enforce conservatism. The authors comment that the inference can be unstable if the number of subtasks is large. I believe the instability is due to a wrong output embedding. It would be better if the authors could theoretically characterize the relationship between the embedding and the number of subsets.
(c) I also have the following concerns on the experiment section
- in Table 2 of main text, some baseline results are lower than their typically reported ones in prior work, e.g., CQL
- in Table 2 of the main text, the results of LAPO on antmaze are on the "v1" version of the datasets, and I can tell that the results of IQL on antmaze are from the "v2" datasets. It is unclear whether the results of the baselines and DROP are also obtained on the "v2" datasets, since the authors do not specify the dataset versions in the main text. This is hugely problematic.
- in Table 2 of the main text, it is unclear which variant of DROP is used to report the results, given that the authors introduce four variants of DROP in Section 5
- I can see that the authors use the MuJoCo "v0" datasets in Figure 2 but "v2" in Table 1 in the appendix. You should unify the versions of the datasets used.
- in line 72, the authors write that the method "achieves better performance compared to state-of-the-art offline RL algorithms". However, the compared baselines are NOT state-of-the-art, especially on the MuJoCo datasets. The authors ought to use this term carefully.
- the superior performance of DROP is acquired via very careful tuning of hyperparameters. Based on Table 5 in the appendix, the authors carefully search for the optimal number of subsets $N$ for each dataset. This value varies across datasets and may lie in a large range (e.g., 1000 subsets on halfcheetah-medium-expert, but 50 subsets on halfcheetah-medium-replay). The choice of the number of subsets is quite odd. For example, the halfcheetah-medium-expert dataset only contains two modes, while DROP requires 1000 subsets to learn a contextual behavior policy. Meanwhile, it is unclear how this value affects the performance of DROP in practice. The authors say that "we find that when the number of sub-tasks is too large, the inference is unstable", but how unstable is it? I believe this is a quite important hyperparameter, and a parameter study on it ought to be included. The need to search for the best number of subsets decreases the practicality of DROP.
- based on Table 3 in the appendix, the authors use a large batch size of 1024 for DROP. As far as the reviewer can tell, the baseline algorithms all use a batch size of 256. A recent work shows that a larger batch size is helpful for offline RL learning [4]; hence I believe the comparison of DROP against other methods is not fair.
- it is difficult for the user to find the best time interval when deploying DROP in practice, and the best values may differ across different scenarios
[4] Q-Ensemble for Offline RL: Don't Scale the Ensemble, Scale the Batch Size
(d) It is unclear whether the proposed method can be applied in real-world applications, where we may require a fast response from the policy. Since DROP requires test-time adaptation, I think at the current stage it is not suitable to be deployed on real-world robots. Moreover, in real-world scenarios, we actually expect robots to be able to adapt their policies during deployment, as it is highly possible that they encounter unseen scenarios (e.g., different landforms) and need to adapt their policies. I do not see the potential of using DROP to mitigate these challenges. The generalization capability of DROP to unseen scenarios is limited.
(e) Minor issues
- the format of appendix seems to be another venue instead of NeurIPS
- appendix A contains nothing
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: (a) this paper does not seem to involve any theoretical analysis, why do you choose `yes` to `theory`?
(b) it seems the performance of DROP can be largely affected by the performance of the behavior policy. Can it be improved if we train the contextual behavior policy in an IQL-style manner, i.e., weighted behavior cloning?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 2 fair
Limitations: The authors discuss several limitations of their work, and the reviewer personally agrees with that.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **About Weaknesses:**
**(a): concerns about the ideas.**
+ *the difference between CQL and DROP:* The main difference is that DROP performs non-iterative bi-level offline RL optimization, while CQL performs iterative bi-level offline RL optimization (see Figure 1 in the main paper).
+ *comparison to [1, 2, 3] and more advanced algorithms:* Please refer to the global response for the comparison.
+ *goal-conditioned methods ought to be included numerically:* Thank you for the suggestion. We will include it in our revision.
+ *stronger methods are expected:* See CCVL in the global response.
+ *criteria in Table 1:* First, in Appendix B.5, we have conducted an ablation study to examine the impact of the conservative regularization used to learn the score model (Q2A2), and find that removing such regularization leads to unstable performance, thus demonstrating the effectiveness of Q2A2. Second, in Figure 3, we also demonstrate the effectiveness of answering Q3A3, i.e. that performing adaptive policy inference can produce better results.
+ *concerns on the task decomposition rule:* First, we point out that the purpose of performing task decomposition is to establish a link between offline RL and MBO, thereby analyzing different offline RL methods from a unified perspective. Second, the purpose of performing task decomposition (using returns) is to provide a fair comparison with previous approaches (i.e., RvS) in the non-iterative paradigm, thereby demonstrating the superiority of DROP's design (by answering Q1, Q2, and Q3). Third, DROP can treat each trajectory as a single task, as in our DROP+CVAE implementation (lines 296-306), which does not require complex task decomposition. Our experiments also show the effectiveness of DROP in this setting. Thus, the algorithm designer does not need to worry much about the choice of decomposition rule in deployment, and can simply take the DROP+CVAE implementation (treating each trajectory as a single subtask).
**(b) whether the score function will overestimate even with the introduced regularization**
If we assume that this problem (overestimation, raised by the reviewer) occurs, there is actually an implicit advantage of DROP: the overestimation occurs on top of the inference of z, not on top of the actions. That is, even if the inference yields an OOD z, the contextual behaviour policy still has the ability to produce in-distribution actions. We have also done an ablation study on this in Appendix D (lines 383-428) and find that the learned behaviour policy can protect against the shift of the inferred embeddings. Similarly, the same benefit (implicit OOD regularisation) is also observed in the PLAS paper. Thus, we believe that there is no need to be overly concerned about DROP's overestimation.
**(c) concerns on the experiment section.**
+ *CQL results:* Our CQL results in Antmaze domain are taken from the RvS paper and results in Gym domain are taken from the LAPO paper.
+ *Environment version in Table 2:* All results are on the "v2" version datasets, except the results of LAPO on antmaze, which are on the "v1" dataset. To unify the dataset versions used, we re-implement the LAPO algorithm (not released by the authors) and report the results. We refer the reviewer to question (7) in the authors' response to reviewer SsTF for our reproduced results. We will include them in our revision of the paper.
+ *which variant of DROP in Table 2:* DROP-Grad-Ada.
+ *unify the versions of the dataset used:* Thank you for the suggestion. We will unify it in our paper revision.
+ *searching the best number of subsets:* In our experiments, we take the number of subsets as a hyperparameter and choose it with typical hyperparameter tuning strategies. As for the reviewer's concern about algorithmic complexity, we do not think it is a serious worry, because in the implementation we can treat each trajectory as a separate task, similar to the DROP+CVAE implementation in the paper. This simple implementation can give better performance, and at the same time the algorithm designer does not have to worry much about the decomposition rule.
+ *large batch size:* We respectfully point out that the concern raised by the reviewer based on paper [4] does not apply here. Paper [4] does emphasize the importance of large batch sizes, but it is still based on Q-ensembles (to eliminate OOD problems). Our approach to OOD problems has nothing to do with this paper (or the Q-ensemble idea). A large batch size can improve the robustness of model training, and we respectfully do not think that all offline RL work (and even future works) should be forced to use a batch size of 256 for the sake of "fairness".
**(d) concerns on real-world applications.**
We note that the reviewer's main concern is that real-world deployment requires real-time inference and that this is time consuming. We think the reviewer is overly concerned about this. First, even if our z is fixed (updated only at s_0), we have found experimentally that DROP still outperforms the RvS and decision-transformer baselines. Second, we do not actually need to run inference at every step: we found that we can still get good results by running inference every 10 steps. We tested the inference time on a rather old RTX 2080 Ti, and it only takes about 0.01 seconds per inference. In particular, in many of today's robotic-arm tasks, the arm's movement is quite slow, so 0.01 seconds will have essentially no effect. Third, our model only uses an MLP network, so its inference is much faster than that of transformer/diffusion-based methods. Therefore, we do not believe that inference time will be an obstacle for DROP in real applications.
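For concreteness, the "re-infer every 10 steps" deployment scheme described above can be sketched as follows. This is an illustrative sketch only, not the authors' code: `env`, `contextual_policy`, and `infer_embedding` are hypothetical stand-ins for the environment, the learned contextual behavior policy, and the score-based embedding inference.

```python
def deploy(env, contextual_policy, infer_embedding, interval=10, max_steps=1000):
    """Run a contextual policy, re-inferring the task embedding z only
    every `interval` steps to amortize the test-time adaptation cost."""
    s = env.reset()
    z = infer_embedding(s)  # initial inference at s_0
    total_reward = 0.0
    for t in range(max_steps):
        if t > 0 and t % interval == 0:
            z = infer_embedding(s)  # periodic re-inference
        a = contextual_policy(s, z)
        s, r, done = env.step(a)
        total_reward += r
        if done:
            break
    return total_reward
```

Setting `interval` to infinity recovers the "z fixed at s_0" variant mentioned above; `interval=1` gives per-step adaptation.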
**About Questions:**
**(a) Why do you choose yes to theory?**
We apologize for any confusion about this. We will correct it.
**(b) train the contextual behavior policy in an IQL-style manner**
We thank the reviewer for pointing out this option. We refer the reviewer to the global response for the comparison.
---
Rebuttal Comment 1.1:
Title: Official Comment by Reviewer YuS3
Comment: Thanks for your rebuttal. After carefully reading the paper (and appendix) again, reading the comments from other reviewers and the corresponding rebuttal from the authors, I decided to keep my current rating of 3. Please find the comments and suggestions below
**On the method**
- **The regularization term is not novel**. The authors utilize a CQL-style optimization objective. Though the authors reply that *DROP performs non-iterative while CQL performs iterative bi-level offline optimization*, their differences are minor
- **The learnt score function is action-value function $Q$ in nature**, but additionally conditioned on the task embedding
- **Stronger goal-conditioned and return-conditioned methods ought to be included as baselines**, e.g., Diffuser and its variants
- **On the task decomposition rule**. The authors reply that *DROP can treat each trajectory as a single task, which does not require complex task decomposition*. This raises the following concerns: (a) the number of sampled starting points can be large, making the test-time adaptation of DROP quite slow (b) the authors write in line 346 that **when the number of sub-tasks is too large, the inference is unstable**. It is unclear how unstable it is and whether DROP+CVAE can *absolutely* address this issue (c) many trajectories have similar returns (they should have similar task embeddings), and $z$ can be diverse if the dataset is large
- **DROP introduces too many hyperparameters**. I still believe DROP is redundant and introduces too many critical hyperparameters, e.g., number of subtasks, inference time intervals, learning rate in Eqn 9, Lagrange threshold, dimension of embedding, etc. Finding a balance between these parameters is hard when the simulator is absent
- **Potential overestimation in the score function**. The authors reply that *even if the inference yields an OOD z, the contextual behavior policy still has the ability to produce in-distribution actions*. This is understandable, since the contextual policy is trained via behavior cloning. However, that does not mean that potential overestimation does not matter in DROP. In Appendix D Figure 6, the performance of DROP drops drastically on halfcheetah-medium and hopper-medium without the regularization term. That is to say, **overestimation in the score function can significantly affect the performance of DROP**. This is natural, since the task embedding $z$ is obtained by querying the score function, and a wrong embedding can be fed into the contextual policy if the score function overestimates. Also, one can see in Table 6 in the appendix that DROP fails on the relocate-expert dataset and performs poorly on the hammer-expert dataset. I think these failures can also be attributed to overestimation in the score function. These altogether make the practicality of DROP doubtful and limited.
**On the clarity of the paper**
- the authors ought to unify the versions of the dataset used and specify what variant of DROP they use to prevent any misunderstanding. MuJoCo -v0 datasets can be removed as they have bugs. DROP has too many variants and some of them can be discussed in the appendix
- the authors should then run CQL with the d3rlpy implementation
- given the potential benefits of large batch size, it is better to use a batch size of 256
- no theoretical analysis is included. From a pure engineering perspective, it is okay to search for the best number of subsets, or to simply treat each trajectory as a subtask; but from a NeurIPS paper of this kind, I expect to understand why DROP works with a chosen number of subsets. Theoretical analysis could make this paper stronger
- the speed can be an obstacle in real applications if we treat each trajectory as a subtask and the dataset is large
- the authors show in Appendix B.1 that multiple behavior policies deliver better resilience to characterize the offline data than a single behavior policy. However, they train a single contextual policy in DROP and this gap ought to be further clarified
**Suggestions**
- include the comparison and discussions against APE-V, CCVL, RoMA in the later version
- include a numerical comparison against some goal-conditioned and return-conditioned algorithms
- I understand that the authors want to show the advantages of DROP over prior methods, while answering Q1 Q2 Q3 (e.g., the authors write in line 190 that *However, both F-BC and RvS-R leave Q2 unanswered*) do not necessarily indicate that prior methods are flawed
- LAPO has an official codebase, the authors ought to use that instead of your implementation
- parameter study on the number of subtasks is recommended
- IQL-style manner means that you can update the contextual policy via $\mathbb{E}[\exp(k A(s,a,z))\log\beta(a|s,z)]$, where $k$ is the inverse temperature and $A$ is the advantage. To that end, you need to train an extra value function $V(s,z)$ (no regularization on the score function is needed then)
I hope that the authors can find some of them helpful
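The suggested IQL-style update, i.e. advantage-weighted behavior cloning with the policy's action likelihood $\log\beta(a\mid s,z)$, can be sketched as a loss function. This is an illustrative sketch, not code from either party; `awbc_loss` and the weight clipping are assumptions (clipping is a common numerical-stability choice in such implementations).

```python
import numpy as np

def awbc_loss(log_probs, advantages, k=3.0, clip=100.0):
    """Advantage-weighted behavior-cloning objective
    E[ exp(k * A(s,a,z)) * log beta(a|s,z) ], returned as a loss
    (negated mean). Weights exp(k*A) are clipped for numerical
    stability; k is the inverse temperature."""
    weights = np.minimum(np.exp(k * advantages), clip)
    return -np.mean(weights * log_probs)
```

Transitions with higher advantage get exponentially larger cloning weight, so the contextual policy is pulled toward the better actions in each subtask without any explicit regularization on the score function.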
---
Reply to Comment 1.1.1:
Comment: We thank the reviewer for the detailed suggestion. I would like to make the following clarifications.
**1. regularization term.**
We did not emphasize that this is our contribution. We just used it in our implementation. Our main contribution (novelty) is to propose a non-iterative offline RL paradigm from the perspective of MBO (as pointed out by Reviewers 7FJj, hhjh, and SsTF).
**2. The learned score function is an action-value function $Q$ in nature, additionally conditioned on the task embedding.**
Yes, you are right.
**3. It is unclear how unstable it is and whether DROP+CVAE can absolutely address this issue (unstable) many trajectories have similar returns (they should have similar task embedding), and $z$ can be diverse if the dataset is large.**
The source of instability is that the naive DROP implementation additionally introduces a one-hot encoding process, which leads to unstable learning if the number of subtasks (the dimension of the one-hot code) is too large. However, DROP+CVAE does not introduce any additional one-hot encodings and learns the task embeddings directly through a CVAE, hence *its learning is stable*. Also, since the CVAE explicitly introduces a Gaussian prior bounding the learned embeddings, *there will be no such problem as the reviewer mentions*, namely that $z$ becomes very diverse if the dataset is too large.
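The "Gaussian prior bounding the learned embeddings" refers to the standard KL term of a conditional VAE. A minimal sketch of that term (illustrative only; the function name and the diagonal-Gaussian parameterization via `mu` and `log_var` are our assumptions, not the authors' code):

```python
import numpy as np

def kl_to_standard_normal(mu, log_var):
    """KL( N(mu, diag(exp(log_var))) || N(0, I) ), summed over the
    embedding dimensions. Adding this penalty to the CVAE's
    reconstruction loss keeps the task embeddings z concentrated
    around the origin, regardless of how large the dataset is."""
    return 0.5 * np.sum(np.exp(log_var) + mu**2 - 1.0 - log_var)
```

The penalty is zero exactly when the encoder matches the prior and grows as the embedding distribution drifts away from it, which is why the embeddings stay in a bounded region.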
**4. Hyper-parameters on 1) the number of subtasks, 2) inference time intervals, 3) the learning rate in Equation 9, 4) the Lagrange threshold, and 5) the dimension of embedding.**
+ 1). DROP+CVAE does not need to manually select the parameter of the number of subtasks.
+ 2). An inference time interval setting of 10 can achieve a balance between model performance and inference time on most tasks.
+ 3). The learning rate in Equation 9 does not need to be set separately, and it is fine to keep it consistent with the model training learning rate.
+ 4). The Lagrange threshold is a hyper-parameter, and we choose it with standard hyper-parameter selection rules. Empirically, we did not observe a significant difference in performance across a range of values, so we simply set it to 2 (see Implementation Details in the appendix). Our model is therefore robust to this parameter.
+ 5). In experiments, we didn't spend much time tuning the dimension of $z$ (CVAE’s learning is stable), and a simple setting of 5 on most tasks is sufficient.
In summary, all these hyper-parameters mentioned by the reviewer *are not bottlenecks affecting the deployment* of the algorithm in real-life tasks.
**5. Potential overestimation in score function.**
We would like to emphasize that there is potential overestimation on some tasks, which is why we introduced regularization (over the score function) to eliminate it. *Through ablation studies, we also found that this regularization does eliminate the potential overestimation.* At the same time, we would like to point out that we did not claim this regularization as our contribution: regularization is useful for eliminating overestimation in offline RL, and thus we incorporated it into our method design.
**6. Regarding clarity and suggestions.**
Thank you very much for your suggestions to improve the quality of this paper. We sincerely accept all your suggestions. At the same time, we note that the current rebuttal/discussion phase of NeurIPS does not support modifying submissions, and we will incorporate all your suggestions into our revision. Meanwhile, we note that the modification does not involve the core contribution of our paper, and we hope you are satisfied with the main contribution of this work.
---
We would like to thank the reviewer for the thoughtful comments. Please let us know if there are any concerns preventing you from raising your score. | Rebuttal 1:
Rebuttal:
**(1) comparison to prior adaptive baselines.**
1) The following table shows a comparison with APE-V [1] on the D4RL suite. The APE-V results are taken from the original paper, which uses SAC-N as the base offline RL implementation. It is important to acknowledge that DROP does not generally outperform APE-V [1]. Nevertheless, we believe that this result provides valuable insights for future research on MBO solutions to offline RL problems.
| | APE-V [1] | DROP |
| ---- | ---- | ---- |
| halfcheetah-random | 29.9 $\pm$ 1.1 | 32.0 $\pm$ 2.5 |
| halfcheetah-medium | 69.1 $\pm$ 0.4 | 52.4 $\pm$ 2.2 |
| halfcheetah-medium-expert | 101.4 $\pm$ 1.4 | 102.2 $\pm$ 1.5 |
| halfcheetah-medium-replay | 64.6 $\pm$ 0.9 | 50.9 $\pm$ 1.6 |
| hopper-random | 31.3 $\pm$ 0.2 | 20.8 $\pm$ 0.3 |
| hopper-medium | | 61.5 $\pm$ 3.7 |
| hopper-medium-expert | 105.7 $\pm$ 3.7 | 107.2 $\pm$ 1.5 |
| hopper-medium-replay | 98.5 $\pm$ 0.5 | 96.3 $\pm$ 2.5 |
| walker2d-random | 15.5 $\pm$ 8.5 | 5.2 $\pm$ 1.6 |
| walker2d-medium | 90.3 $\pm$ 1.6 | 82.1 $\pm$ 5.2 |
| walker2d-medium-expert | 110.0 $\pm$ 1.5 | 109.3 $\pm$ 0.4 |
| walker2d-medium-replay | 82.9 $\pm$ 0.4 | 83.5 $\pm$ 1.2 |
2) Since CCVL [2] only experiments on discrete-action Atari games and does not test its performance on the common continuous control tasks of D4RL, we also deploy our DROP method on 4 offline Atari games for comparison. We present the results in the following table (using the initial 10\% of the replay dataset after 12.5M gradient steps). We can see that our DROP outperforms CCVL on 2 out of 4 tasks.
| | CCVL [2] | DROP |
| ---- | ---- | ---- |
| Asterix | 7576.0 $\pm$ 360.2 | 6517.9 $\pm$ 564.4 |
| Breakout | 121.4 $\pm$ 10.3 | 139.5 $\pm$ 25.0 |
| Pong | 13.4 $\pm$ 6.1 | 10.7 $\pm$ 8.2 |
| Seaquest | 1211.4 $\pm$ 437.2 | 1358.1 $\pm$ 352.0 |
3) Regarding the paper [3] mentioned by the reviewer, we think it is difficult to make a fair experimental comparison. The main reason is that paper [3] asks the user to adjust the policy behaviour after training, which is too subjective to compare against fairly. We will cite this paper and explain its connection to our work. Thank you again for bringing up this valuable related work.
[1] Ghosh, D., Ajay, A., Agrawal, P., & Levine, S. (2022, June). Offline rl policies should be trained to be adaptive. ICML 2022
[2] Hong, J., Kumar, A., & Levine, S. (2022). Confidence-Conditioned Value Functions for Offline Reinforcement Learning. ICLR 2023
[3] Swazinna, P., Udluft, S., & Runkler, T. (2022). User-Interactive Offline Reinforcement Learning. ICLR 2023
**(2) comparison to RoMA.**
As suggested by reviewer YuS3, we also compare with RoMA [4]. In the implementation, we take the parameters (neural network weights) of the behavioural policies as the design input and the behavioural scores as the outputs. In the pre-training phase, we also use Gaussian noise for input perturbation; in the inference phase, we perform parameter inference (outer-level optimisation) with 200-step updates. The comparison results are shown in the following table. We can see that RoMA improves the performance of COMs and can outperform our DROP on 1 out of 3 tasks.
| | COMs | RoMA | DROP |
| ---- | ---- | ---- | ---- |
| halfcheetah-medium-replay | 41.4 | 56.8 $\pm$ 2.9 | 50.9 $\pm$ 1.6 |
| hopper-medium-replay | 49.7| 61.2 $\pm$ 2.0 | 96.3 $\pm$ 2.5 |
| walker2d-medium-replay | 33.9 | 67.4 $\pm$ 4.1 | 83.5 $\pm$ 1.2 |
[4] Yu, S., Ahn, S., Song, L., & Shin, J. (2021). Roma: Robust model adaptation for offline model-based optimization. NeurIPS 2021.
**(3) train the contextual behavior policy in an IQL-style manner.**
We note that IQL essentially weights the behavior policy with Q-values, and that its policy is not a contextual policy but a simple policy that maps states to actions. In this paradigm, we believe a reasonable way to interpret the reviewer's proposal (IQL-style) is to first perform a policy improvement for each contextual policy, and then distill the multiple improved contextual policies into a single fixed policy. In the implementation, we first use the score function to optimize z (implicit policy improvement), and then distill the improved contextual policy into a fixed policy, in the same spirit as the Q-weighted BC in IQL. The following table shows the comparison results. We can see that this method ("distilled" DROP) is able to match DROP on walker2d-medium-v2 and halfcheetah-medium-v2, but it still performs worse than our non-iterative DROP implementation overall.
| | DROP | "distilled" DROP |
| ---- | ---- | ---- |
| walker2d-random-v2 | **5.2** $\pm$1.6 | 4.5 $\pm$0.8 |
| hopper-random-v2 | **20.8** $\pm$0.3 | 18.9 $\pm$0.2 |
| halfcheetah-random-v2 | **32.0** $\pm$2.5 | 27.8 $\pm$1.54 |
| walker2d-medium-v2 | 82.1 $\pm$5.2 | **84.2** $\pm$2.4 |
| hopper-medium-v2 | **74.9** $\pm$2.8 | 67.1 $\pm$2.1 |
| halfcheetah-medium-v2 | 52.4 $\pm$2.2 | **53.9** $\pm$1.7 |
| sum | **267.4** | 258.4 | | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Moral Responsibility for AI Systems | Accept (poster) | Summary: In this paper the authors put forward a formalised definition of moral responsibility that can be applied to AI systems. To do so, they compare and contrast how existing definitions fare with regard to a causal and an epistemic condition for responsibility. Using these contrasting cases, the authors argue for a form of counterfactual ‘Necessary Element of a Sufficient Set’ definition of causation, and combine two existing definitions of the epistemic condition.
Strengths: I should note that I am less able to assess the technical aspects of the paper, and so I focus on the conceptual aspects.
Originality: The paper advances a novel formalisation of responsibility, which is contrasted clearly and well with existing definitions of the causal and epistemic conditions for responsibility.
Quality: As I’m less able to assess the technical aspects of the paper, I do not believe that I am well-placed to comment on the technical quality. Conceptually, the paper is strong.
Clarity: The paper is well-written. (Very minor, but after each example large sections of text are left in italics. This makes it difficult to read and to distinguish between the example and the discussion of the example.)
Significance: Both the causal and epistemic conditions for responsibility are debated in the wider literature, where a challenge is to capture important nuances in formalisations. If successful, the contribution is significant.
Weaknesses: I do not feel competent to comment on the technical substance of the paper.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: My questions are general conceptual questions, not ones that necessarily need to be addressed.
At the end, the authors indicate that the quantified approach taken could be applied to collective responsibility. How do the authors anticipate the approach dealing with the epistemic condition regarding collective responsibility, in particular? What about institutional agents and responsibility?
Blame and praise are inherently more value-infused than a causal notion of responsibility. How would the authors proceed to capture these aspects, and is it necessary?
Confidence: 1: Your assessment is an educated guess. The submission is not in your area or the submission was difficult to understand. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: n/a
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their positive and encouraging feedback!
The reviewer raises a good point that the epistemic condition in particular poses some challenges once we generalize to a notion of collective responsibility. This issue is not novel though, as research in social epistemology (and in social choice theory) also focuses on collective epistemic states, so in the first instance we could consult this literature for inspiration. One option would be to integrate communication between the agents into our framework, which definitely seems realistic within the context of AI systems (just think of autonomous vehicles, or swarms of drones, for example). Thus, the various agents can coordinate their actions, and in this sense be treated more or less as if they form a single agent, somewhat similar to how the law treats corporations. But these are merely initial thoughts on the subject, and we are definitely open to suggestions on this topic.
Regarding blame and praise: as we mention in our paper, we take our definition to at least form a necessary condition for blame and praise. We offer one line of thought on how to construct a definition of blame in our reply to reviewer ir7c, when discussing how the recent work "A Causal Analysis of Harm" could be combined with our approach to form a definition of blame. (We add a bit more detail in our reply to reviewer ir7c.) A natural further suggestion would be to do the same for the notion of benefit, giving a definition of praise. We would need to integrate further conditions than merely harm and benefit though, because in and of themselves those do not deal with the issue of how to compare and weigh multiple outcomes (think of trolley cases, the doctrine of double effect, etc.), or with the "cost" that an agent incurs by performing an action, nor do they fully capture what duties and obligations an agent might have. In other words, generalizing to blame and praise would be an exciting but far from trivial avenue to pursue.
Lastly, we apologize for the long sections of italics; we now realize they are not very readable and will correct this in the final version. | Summary: This paper formalizes the notion of responsibility of an agent by using
contrastive necessary element of a sufficient set causation (CNESS) for the
causal condition and Halpern and Kleiman-Weiner's notion of epistemic
responsibility. The resulting formulation of responsibility benefits from the
strong alignment of CNESS with intuitions of causation/responsibility vis-a-vis
simple NESS and Halpern and Pearl-causation. The paper extends responsibility
from a binary variable to real-valued degree of responsibility.
Strengths: ## Originality
## Quality
- `[major]` Incorporates the strengths of multiple components to formulate an effective formalization of responsibility.
- `[minor]` Formulation allows for degrees of responsibility.
## Clarity
- `[minor]` Examples illustrate the differences between theories of causation and epistemic responsibility.
## Significance
Weaknesses: Although the formulation of moral responsibility is sound on philosophical
grounds, the paper does not make a strong case for how this applies to AI.
Thus, this paper seems out-of-scope for NeurIPS.
Technical Quality: 4 excellent
Clarity: 3 good
Questions for Authors: How is this formulation of responsibility particularly well-suited for AI? What
impacts could it have?
## Comments
- `Example 2` The paragraphs of italicized text are a bit distracting.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 3 good
Contribution: 2 fair
Limitations: If arguing for a definition of moral responsibility, it is important to at
least briefly touch on the societal impacts of the work vis-a-vis the
contributions.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We must say that we were very surprised to read this review. The reviewer recommends a strong rejection, and yet they fail to point out a single flaw in our paper. (For completeness, we should point out that their summary does contain a flaw, as we do not use HK's epistemic condition but rather present a novel condition of our own.) Moreover, the entire basis for their verdict lies in their comment that they fail to see how responsibility is relevant to AI, and thus they judge our paper out of scope for NeurIPS.
Is the reviewer not convinced that the ethics of AI is a crucially important topic to do research on, and thus deserves a place at the most important conference on AI? Because if so, then we must firmly disagree (and as reviewer 7Pui outlines in their review, there is a wealth of literature within AI to back this up): we believe that addressing the ethical challenges posed by the use of AI are of vital importance and require much more rather than less research, given that the stakes could not be higher. Moreover, we are confident that this is also the predominant view nowadays within the AI community, and we are therefore at pains to understand the reviewer's succinct and dismissive judgment of our paper.
We develop a notion of responsibility that is formulated using nothing but causal models and probabilities, both of which are well-established formalisms within AI, and one that only requires a very minimal notion of agency to be applicable, thus making it well-suited for the kinds of artificial agency that one could attribute to even rather limited AI systems. We see this as well within the scope of NeurIPS, and we therefore kindly ask the reviewer to explain why they believe otherwise.
---
Rebuttal Comment 1.1:
Title: Ethics is important but paper does not apply it to AI
Comment: I have read the author rebuttal.
> The reviewer recommends a strong rejection, and yet they fail to point out a single flaw in our paper. Moreover, the entire basis for their verdict lies in their comment that they fail to see how responsibility is relevant to AI, and thus they judge our paper out of scope for NeurIPS.
I did not find any major flaws in the paper _per se_, and I think that the paper makes good contributions which are well supported---in the context of philosophy.
I do not "fail to see how responsibility is relevant to AI", but I fail to see how this paper contributes to the bridge between the philosophy of responsibility and AI.
A contribution along these lines would do more to show how the proposed formulations of responsibility tangibly connect to the practice of AI.
For example, one might argue how allowing for degrees of responsibility, autonomy, etc. are necessary for applying to AI since they present a greater range and variety of capability vis-a-vis humans.
Or another direction might be showing how the formulation of responsibility can be directly plugged into current ML paradigms.
Either way, a practitioner of deep learning should be able to walk away from this paper with at least some sense of how this notion of responsibility is to be applied to the practice of deep learning.
So far as I can tell, the only step that this paper takes in this direction is mentioning in the introduction that AI can fulfill the preconditions for responsibility.
My judgment, then, is that this paper would have to do significantly more "bridging" between its solid formulation of responsibility and the practice of AI to be relevant to NeurIPS.
> Is the reviewer not convinced that the ethics of AI is a crucially important topic to do research on, and thus deserves a place at the most important conference on AI?
I am convinced that this is crucially important, but the "of AI" part of this ethics paper does not feature strongly enough.
> We develop a notion of responsibility that is formulated using nothing but causal models and probabilities, both of which are well-established formalisms within AI, and one that only requires a very minimal notion of agency to be applicable, thus making it well-suited for the kinds of artificial agency that one could attribute to even rather limited AI systems.
If about 1/4 to 1/3 of the paper were spent discussing what is mentioned above (instead of just in the beginning of the introduction and conclusion), I think this paper would easily be a 7/10 or 8/10.
Since space is, of course, limited, some of the more weedy (but not unnecessary) philosophical issues could be bumped to the appendix---the main arguments would be intact for the general audience, and for the philosophically inclined, they could read on in the appendix.
---
Reply to Comment 1.1.1:
Comment: We thank the reviewer for clarifying their reasoning; this is very helpful!
We agree that we could say more on the explicit connection to AI, and we are definitely willing to do so. (Note that we have an extra page in the camera-ready version, which helps with the space issue.) The instructions for the discussion period were to be brief in our responses, and therefore we here only mention at a very high level some preliminary thoughts on this. However, if the reviewer would like us to elaborate, we can certainly do so.
1: A good formal definition of responsibility _simpliciter_ is in and of itself highly relevant to having a good definition of responsibility for AI systems, since we would like the latter to be closely informed by the former.
2: By minimizing the role of the Control Condition, our definition requires only a very weak sense of agency, making it particularly suitable for AI systems. (In particular, we can connect our definition to the more pragmatic side of the debate on Artificial Agency.)
3: Causal models and probability are well-known to AI researchers, in contrast to many more philosophical and informal notions that are used in traditional definitions of responsibility, thus our definition can be more easily understood and applied by AI researchers than other definitions.
4: Our definition can easily be integrated into AI systems that are _already_ explicitly making decisions informed by a causal model. This allows for an AI system to itself reason about responsibility when making decisions. (Also, as mentioned in our reply to reviewer ir7c, it can be integrated together with "A Causal Analysis of Harm" (NeurIPS 2022), and this could be the starting point of a toolbox for ethical reasoning. Furthermore, they have a paper on "Quantifying Harm" (IJCAI 2023) and this sits well with our degree of responsibility, because -- as the reviewer points out -- degrees matter when comparing the kind of fine-grained decisions that an AI is capable of performing and reasoning about.)
5: More generally, a "regulatory" AI system that has a causal model of the salient moral factors in some domain of application can use our definition to evaluate _other_ AI systems that are operating within that domain, by first querying that system in an efficient way so as to extract a representation of the epistemic state that is informing the system (which is not trivial of course!), and then evaluate its decisions by using these two components and applying our definition.
We can offer more details on each of these issues if the reviewer so desires. | Summary: This paper presents a novel formal definition of moral responsibility tailored for AI systems, filling in both causal and epistemic conditions. The work effectively draws comparisons to BvH and HK's works, favoring the Counterfactual NESS definition of causation and a nuanced epistemic condition.
Strengths: The proposed formal definition of moral responsibility for AI systems is a novel contribution. The paper contrasts several existing theories of causation and epistemic conditions to make its case, demonstrating a thorough understanding of the existing literature. The submission is clearly written and well organized. The authors articulate complex concepts with clarity and a lot of interesting examples.
Weaknesses: There is a need for further elaboration on how this proposed concept can be practically applied and measured in real-world AI systems. Also, the work would be more convincing if it incorporated a broader range of philosophical perspectives on moral responsibility. Lastly, while the paper talks about future work related to defining blame and praise, these aspects are not explored in the current submission, leaving an incomplete picture of the potential implications of their framework.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: I would like to see more discussions on the application of the proposed definition in real-world scenarios.
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The authors have acknowledged that future work could enhance the quantified approach by considering collective responsibility and degree of causation. However, practical application and testing of the proposed definition are yet to be performed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their comments, and their positive evaluation of our paper!
Regarding their question about real-world scenarios: we actually completely agree! We would also like to see more discussion of how our definition applies to real-world scenarios, and we very much would like to offer this in future work. In particular, there is now a very large empirical literature in experimental philosophy and in psychology on people's actual judgments of responsibility and causation in many real-life scenarios. Although there is an abundance of empirical results, there is much disagreement on their interpretation. Concretely, there is disagreement on whether -- and how -- causal judgments can be separated from judgments of responsibility, on the role of the epistemic states of the agent (see in particular the recent work by Kirfel and Lagnado that we discuss in the supplementary material), on the role of probabilities and background conditions, and on what the right view is on actual causation. In the supplementary material we only briefly sketch how our work could shed some new light on all of this. We would very much like to write a follow-up paper in which we apply our definition to this vast literature, because we are optimistic that it could offer a more principled interpretation than what is currently on offer.
Of course there also exist other kinds of real-world scenarios that we should look into, namely the kind of scenarios in which an AI system might realistically be deployed to make the kind of decisions to which our definition would apply. Autonomous vehicles and the oft-discussed trolley cases come to mind (although these would require extending our definition to include a focus on multiple outcomes), as well as the use of autonomous weapons and the moral requirements that are implied by principles of just war, to name just a few.
More generally, recently Beckers, Chockler, and Halpern have developed "A Causal Analysis of Harm" (NeurIPS, 2022), which could be integrated into our framework as well (except that we would choose to replace their definition of causation with the CNESS definition). At first thought, one might speculate that an agent who performs action A is blameworthy for some outcome O whenever:
- the agent in question is responsible for O in virtue of performing A (which is the focus of our analysis), and
- A caused harm to another agent, where the harm caused is due to A causing O (which is the focus of Beckers, Chockler, and Halpern).
Lastly, as we have outlined in our response to reviewer 7Pui, we completely agree that our definition focuses only on one kind of responsibility and should be placed within a broader landscape that discusses important related (but distinct) notions such as blame, accountability, liability, and others. We will be more explicit in the paper regarding the relation of our work to this broader landscape. | Summary: The paper aims to link causal and moral responsibility through formal modeling using structural causal models. Engaging with existing literature on causality, the paper argues for the benefits of certain definitions over others in introducing a 3-part _responsibility schema_ based on a control condition (that an agent had control over the causal action), a causal condition (that the controlled action caused the outcome), and an epistemic condition (the agent believed that their control choice minimized the probability of an alternative, less preferred outcome, thereby avoiding responsibility for the actual outcome). In support of this model, the paper shows several example scenarios that differentiate existing definitions that could be used to operationalize this schema.
Strengths: + The linking of causal and moral responsibility is useful and important.
+ Connecting reasoning about values to formal models is difficult but a promising research direction.
+ The paper engages deeply with existing definitions of causality and its relationship to agents' beliefs.
+ The paper is well organized in support of its contribution, the responsibility schema.
Weaknesses: - The paper's view of moral responsibility is not presented as substantially distinct from causal responsibility and the connection to human values or moral philosophy is extremely weak. This is disappointing given that there is a large literature on linking causal responsibility to the value of accountability _within the context of AI systems_ that has been developing for many decades and now constitutes one of the main lines of thought in two major venues (ACM FAccT and ACM/AAAI AIES), which are wholly ignored in the discussion and citations (see, e.g., various works on accountability of, e.g., Kroll and also various works on liability of, e.g., Selbst). I think this context should be acknowledged, at a minimum. If it's not relevant, that's acceptable but the paper should say why and mention it as an alternative approach.
- Although the paper aims to link causal and moral responsibility, it ignores several classical lines of thought attacking this question. For example, a classical question in law going back hundreds of years is the distinction between causal or moral responsibility and _liability_, the question of which agent the law should assign responsibility to (a theory in which many long-discussed cases directly contravene examples in the paper). Relating these ideas to computing systems, nearly 30 years ago, Nissenbaum's classic "Accountability in a Computerized Society" addressed the question of when causal and moral responsibility could be separated and why they often can't be _in computing systems specifically_. There is even a recent re-establishment of this argument by Cooper, Moss, Laufer, and Nissenbaum revisiting the framework in light of data-derived decision rules. Again, the fact of an alternative approach should inform the location in the literature of the paper's contribution. What is the relationship between the proffered responsibility schema and values such as accountability, regimes such as liability, or concepts such as moral responsibility distinct from causal responsibility?
- I think the paper, by sticking too closely to the details of definitions in the literatures of formal causal calculi and structural causal modeling, falls victim to some of the standard criticisms of this literature and specifically of tying its results to reasoning about human values. If the paper wishes to claim such a tie as a contribution, this context must also be reckoned with. For example, authors such as Hu, Kohler-Hausmann, Miller, and Kasirzadeh and Smart have in the last few years attempted to build theories that deal with multiple-causation and "structural causation" (here, structural in the sense that the cause is inherent to the shape of an entire system, rather than in the sense that the structure of the graph model informs which variables are or are not causally relevant). Such approaches resist reducing causation to even a set of variables or to sufficient conditions, and have proven useful in understanding things which are otherwise difficult, such as describing the plausibility of different counterfactuals. Again, there is an alternative approach in the literature and the contribution would be strengthened by acknowledging it and setting off the contribution in light of existing work. Not everything tangentially relevant has to be cited, of course, but the paper should make clear where its contributions lie and why other work with similar goals is different.
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: * Where do the authors see their contribution and how do they set off alternative approaches in other literatures? What of other concepts such as accountability, liability, or answerability for outcomes?
* Is moral responsibility equivalent to causal responsibility in a modeled context? Why or why not?
* How does the offered model manage situations where causes aren't instrumental but rather structural and systemic? What of multiple-causation, both in general and in these contexts?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 2 fair
Limitations: The work should overall be better contextualized around the full richness of the problem used to motivate it.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their detailed engagement with our paper, but we fail to understand why they recommend rejection. The core contribution of our paper is an analysis that is an improvement over two important existing analyses. As they do not point to any concrete flaws in our analysis, it appears the reviewer’s judgment is based mostly on the absence of a more detailed integration into the broader literature. Although we are unsure how we could do justice to such an integration in a short conference paper, we will add more context and references to better situate our contribution.
Questions:
Q1:
The reviewer is completely right that there is more to responsibility than what is discussed in our paper. Moreover, they are also right in pointing out the importance of many related concepts. The aim of our paper is therefore fairly modest, in the sense that it focusses _only_ on responsibility for an outcome in virtue of performing an action. Furthermore, we point out that our notion of responsibility in and of itself leaves open entirely whether the agent is thereby blameworthy or praiseworthy: instead the definition here presented is a necessary condition for such further judgments, but certainly not a sufficient one. Therefore our definition is in no way meant to offer an _exhaustive_ theory of all kinds of responsibility for AI systems, let alone a theory of accountability and other related notions. Rather, we see our work as filling in just one element of such a theory, which we believe can be integrated into many existing theories out there. (See below for more details.) We will be more explicit on this point.
Q2:
If causal responsibility means “being an actual cause”, then in our view — and for the notion of responsibility that we are focussing on! — such causal responsibility is a necessary condition for moral responsibility, for it is captured by our Causal Condition. We take up this view from the philosophy literature, and it is also well-supported by the law. Given that, on our view, moral responsibility also requires meeting the other two conditions, it is therefore not equivalent to causal responsibility.
Q3:
Our definition exclusively focuses on actual (aka token) causation, and exclusively on variables which represent the actions of an agent. Thus, at first sight, any responsibility that is owed due to there being structural or systemic causes (which are, we take it, not actual, and are not actions), is not covered by our definition. If the reviewer has in mind a particular kind of example where this seems problematic, we would be very much open to discussing it.
Regarding multiple causation: since the only causes that are relevant to our definition are the actions of agents, issues of multiple causation only show up in case of multiple actions. Our Assassins example in which both shoot can be seen as an instantiation of multiple causation. In the current paper we only judge their responsibility individually.
Remarks:
We do not see the connection to moral philosophy as being very weak, given that we are starting out from a responsibility schema that is derived directly from the standard literature in moral philosophy. We therefore disagree with the reviewer that this schema captures the essence of our contribution: our contribution lies in _filling in_ this schema in a manner that we argue is superior to existing accounts. Furthermore, we do not consider this to be an _alternative_ to the important work that the reviewer mentions: it is merely one piece of the full picture that should inform the moral evaluation of AI systems.
We are unclear as to what the reviewer means when they state that there exist classical lines of attack on the view that moral responsibility is closely connected to causation. (To reiterate, we make no claim to exhaustivity: we acknowledge that there exist notions of responsibility that are not so closely tied to causation.) They refer to work on liability, which is not the topic of our paper, and they refer to work by Cooper et al that seems to support the existence of a close connection. In fact, we looked into the article mentioned, and we believe our work could be very beneficial for the issues there identified. For example, they state that "Blame, defined in terms of causation and faultiness, is assigned to moral agents for harms they have caused due to faulty actions", which is entirely in line with our view (see also our reply to reviewer ir7c regarding harm). Later they say that "in practice, if a trained model causes harms, it can be extremely challenging to tease out particular actors who should answer for them." Our work is meant precisely to tease out those actors, and thus we see it as fitting perfectly within the challenges identified by Cooper et al. The reviewer also points to the work by Kroll, and there too we see a connection, as he explicitly focusses both on causal responsibility as well as on the agent’s ability to perform alternative, preferable, actions, as offering one important way to understand accountability. So we fail to see how our work is in conflict with the literature on accountability that the reviewer is referring to.
Lastly, our reliance on causal models is a consequence of their widespread use in the study of causation, both in AI as well as in philosophy. Despite their limitations, causal models are still the most popular (and successful, we might add) formalism for addressing causal issues. We would like to learn more as to which criticisms the reviewer has in mind in particular, because as far as we know, these criticisms mainly target counterfactuals and causation that involve causally suspicious variables such as race or gender, and do so in causal models that partly aim to capture the causal structure of society. Our approach does no such thing: we focus on mundane cases of actual causation (rather than structural or type causation), in which the actions of agents cause specific outcomes.
---
Rebuttal Comment 1.1:
Comment: I should say, I firmly disagree with the notion that a review must identify deficiencies within the four corners of the paper. When work is not sufficiently connected to surrounding literature or scopes claims incorrectly based on available ideas, that is a holistic assessment and requires seeing the work in context. I'm not asking that this be a survey, but rather that the contribution be contextualized and that, where the contribution differs from existing ideas, that this gap be justified.
I agree that engaging the causal modeling literature is valuable for the reasons the authors mention: it's the method that gets used in AI research and maybe also practice. I'm a bit confused by the authors' claims that the surrounding literature pointed out in the initial review remains irrelevant - engagement with this work in the comment _demonstrates_ its relevance. By "alternative", I meant less that these works provide distinct frameworks and more that other methods (e.g., critical scholarship/critical theory) are yielding related ideas and it is important to fit the ideas in this paper into the broader context, whether the related work comes to the same conclusions or not. I recognize that there are a lot of contexts the work could be placed within. My argument is that some deference needs to be given to all of them, but perhaps not to the same depth. So for example, it would be reasonable to set aside legal scholarship on liability, which is a huge area that probably isn't super relevant. Work applying ideas about causal and moral responsibility _to AI_ is more relevant, and should be relied on more heavily (this is the essence of the claim in the initial review). This is especially true given that the claim that the work can and should be much more closely tied together is part of the response to another review (UYHL). On the other hand, the work is deeply contextualized in the causal modeling literature, which I think is fantastic, although I'm not well equipped to evaluate the quality of this.
The comment suggests heavily that the related work comes to the same conclusion as this paper, but my point in the initial review was the opposite: although the comment claims otherwise, existing work *separates* causal and moral responsibility. The function of the example of liability - which I concur is not the focus of this work and probably shouldn't be - was to illustrate how different facets of accountability can be separated in practice and often are. Work on liability is relevant insofar as cases where different actors are causally vs. morally responsible vs. liable have been considered under the banner of liability and this is ignored in the work, which claims that causal responsibility is an antecedent condition for moral responsibility (if so, how could they be separated? How could an entity be liable without being causally or morally responsible, and if that's a mistake in the analysis what is wrong with existing work on liability?). This is what I mean by the connection to moral philosophy being weak - it's central to much work that these separations are possible, so rejecting them is a heavy lift, one I don't think is made by this paper.
I do think the claims are overbroad, partly for the reasons just above and even more because, when I challenged the applicability of the blanket claim to scenarios of structural causation, the authors responded that "our definition exclusively focuses on actual (token) causation". This is in conflict with the claim at 66 that the responsibility schema applies to "all definitions of responsibility that we aim to consider" (a quick review of the intro does not give a clear bound to the restriction at the end of the sentence, leaving me to conclude that the schema is meant to apply broadly). I still think the claims need to be narrowed, and in the service of contextualization, such narrowing could take the form of an argument that the schema applies only to situations where actual causation applies and other work demonstrates that this is not all or even many of the situations in which accountability questions apply to AI. So the claim should be narrowed somehow, consistent with the paper's "modest" aims.
I observe a gap between my claim in the review (that issues of structural/systemic cause are under-attended in the paper) and the response (that these are essentially asking the paper to consider type causation, which is distinct but perhaps also important?). The persistence of this gap furthers my view that the context of relating responsibility issues to AI is quite weak in the submitted version.
Indeed, the application of the schema _to AI_ is critical to the contribution of the paper. I still believe that the project of "_filling in_ the schema" has to be justified in terms of how well the filling in applies to the specifics of the scenarios at issue (i.e., the structure of AI systems). Without engaging the context, it's hard to evaluate this.
---
Reply to Comment 1.1.1:
Comment: We thank the reviewer for their continued engagement with our paper.
"Identify": We never claimed that a review should identify deficiencies within the four corners of the paper, but merely that the absence of a single concrete deficiency is hard to square with recommending a rejection. In their response the reviewer still did not identify any deficiency with our analysis, instead the criticism is directed at the lack of contextualization with regards to related work. Therefore we would like to once again emphasize, as we did in our original rebuttal (and also in our rebuttal to reviewer UYHL), that we are perfectly willing to add more background context to better situate our definition within the literature and to be explicit about the scope of the notion of responsibility that we are focussing on.
"Relevance": We are unsure what the reviewer is referring to, as the word “irrelevant” does not appear in our rebuttal. With respect to the work that is critical of using socially constructed variables or ignoring structural causation, we merely meant to make clear that as far as we can tell, that criticism in no way applies to the kinds of variables and causal claims that we make use of in the paper, which is why we asked to learn more about the specific criticism that the reviewer has in mind.
"deference": We entirely agree. As we tried to make clear in our rebuttal, we do not see our approach as being in conflict with the work mentioned in the review, but rather as offering one part of the bigger picture. Moreover, we tried to indicate how our view on responsibility fits quite naturally within two strands of work that the reviewer mentioned, that by Cooper et al., and that by Kroll.
"Separate": We likewise separate moral and causal responsibility, since causation is but one condition of our definition. As we mentioned in our rebuttal and illustrated through various citations, we failed to find arguments in the articles mentioned for the complete separation of causal and moral responsibility, but rather found support for the view that causation is an important component of responsibility. Moreover, when it comes to the philosophical literature on _moral responsibility for outcomes_, which is the focus of our work, causation is almost universally taken to be a necessary condition. (See our citations [5,18,13,3]. The only exception we are aware of is Sartorio, and her disagreement stems entirely from her disagreement with the causal verdicts reached in certain examples.) Concretely, this is a quote from the Stanford Encyclopedia of Philosophy entry on Moral Responsibility, which clearly implies that causation is taken to be a necessary but insufficient condition for the responsibility for outcomes: “Moral responsibility should also be distinguished from causal responsibility. … the powers and capacities that are required for moral responsibility are not identical with an agent’s causal powers, so we cannot infer moral responsibility from an assignment of causal responsibility. … morally responsible agents may explain or defend their behavior in ways that call into question their moral responsibility for outcomes for which they are causally responsible. Suppose that S causes an explosion by flipping a switch: the fact that S had no reason to expect such a consequence from flipping the switch might call into question his moral responsibility ... for the explosion without altering his causal contribution to it.’’
"Liability": As to the separation of liability and causal responsibility, all we can do is reiterate that our focus is on the moral responsibility for outcomes, and that we acknowledge the existence of different conceptions of responsibility. Responsibility is a single term that captures several concepts, and we do not disagree that some forms of liability (such as that which arises due to contractual agreements) do not require a causal connection between the responsible agent and some outcome.
"Actual causation": We fail to see the reviewer’s point: the causal condition in our schema involves particular events, which is per definition what actual causation is concerned with. All three of the NESS, the CNESS, and the HP definition, are examples of definitions of actual causation.
"Narrowed": As we tried to indicate in our rebuttal, we are definitely prepared to add more context to our paper so as to better delineate its scope, and be more explicit about the fact that the notion of responsibility is broader than moral responsibility for particular outcomes.
"Gap": We are unaware of work that discusses responsibility due to type causation, and we would be curious to hear more about this. As outlined in our reply to reviewer UYHL, we disagree that the connection to AI is weak. An AI system making particular decisions or performing particular actions that cause certain outcomes is a widely prevalent phenomenon, and it is to such scenarios that our definition applies. | null | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Beyond Deep Ensembles: A Large-Scale Evaluation of Bayesian Deep Learning under Distribution Shift | Accept (poster) | Summary: This paper evaluates several BDL algorithms on real-world datasets (and real-world distribution shifts) in terms of generalisation (accuracy/correlation metrics) and uncertainty estimation quality (calibration/signed calibration).
Strengths: 1. I think there is a need for benchmarking BDL techniques, and the paper tries to tackle that problem.
2. The point about batch normalisation leading to really poor o.o.d. data performance is interesting, and makes sense. This could be emphasised more as a contribution of the work (if it is not already known, I'm not sure!).
Weaknesses: Overall, I vote for rejection. I am not convinced that this paper is of interest to the community or ready for publication yet. However, I am open to changing my mind if I see evidence from the authors. I've included suggestions and questions for the authors that would enable me to raise my score. It's possible I misunderstood key details; I am more than happy to be corrected.
1. I do not think the paper makes a substantial enough contribution to be accepted at NeurIPS. The contribution of the signed calibration metric is nice but small. If there were a clear takeaway recommendation from the experiment results, that would be a useful contribution to the community, but as far as I can tell, there isn't one. Instead, the experiment section is mostly descriptive and does not provide greater insight and understanding about when one method is better than another. Either making the experiment section's argument clearer or using the results to provide greater insight and understanding would help me increase my score. For example, the result that Bayes-by-Backprop performs very well here is interesting and surprising to me. Why is this the case? VI usually underfits, so does VI regularise to the base model more strongly? Would that be good in the pre-trained setup? This result contradicts other work ([1]) where VI often performs poorly and underfits. Please explain the contradiction.
2. Missing reference to Nado et al. [1]. The need for baselines is clear, but what does this submission offer over that work? I'm quite surprised that I didn't see the reference in the paper. For what it's worth, I think the Nado et al. baselines are also limited (only ever evaluating marginal uncertainty, nothing about sequential decision making), so I think good work remains to be done in this domain, but I don't think the paper offers much over that work. Would you please explain to me, in clear and precise terms, the contribution of this submission relative to that work?
3. Missing baselines: given that the point of this paper is (at least in part) the comparison, it is important to be comprehensive in terms of the baselines covered. I would like to see SNGP, DUQ, and some frequentist approaches like temperature rescaling. This is a clear weakness of the paper. Further, not including function-space approaches is a major limitation: the paper claims to do a thorough investigation, yet leaves out an important class of methods.
4. A key point of the paper is that existing evaluations e.g., on CIFAR-10-C are unrealistic because the distribution shift is constructed and not real world. I think this point is interesting, but I would like to see evidence to support it. For example, if one would make different (and importantly different) conclusions based on benchmarking on CIFAR-10-C vs on WILDS. Further, other BDL papers do look at WILDS datasets e.g. [4], in case you weren't aware of this. Evaluating on WILDS is not sufficient for novelty, in my view.
5. "modern single-model BDL algorithms approximate the parameter posterior better than deep ensembles ..." Well, for one, your evaluation looks at total variation in predictive space, so you can make no claim about the __parameter__ posterior. And, furthermore, recent work [5] shows that the HMC chains from the Izmailov et al paper have not converged well, so I don't think a claim about "approximating the posterior" is valid, you can only claim to approximate the HMC chains. And since the HMC chains perform badly out of distribution, it is not clear that lower total variation is better! See [5] on that, which argues that we might not want to be doing full network inference in the first place.
6. The writing of the experiment section is very poor. The figures aren't referenced in the text, and the writing mostly recounts the results but offers little in terms of understanding, insight, conclusions, or discussion. I found this very hard to read, and I am left confused: what are your takeaway messages? What is the contribution of the work?
7. A claim is made "BDL is in many cases competitive with algorithms that are specifically designed for OOD generalisation" (Lines 393-394). This does not seem to be supported: which benchmarks did you compare to that are specifically designed for OOD data?
Minor points:
- "typical deep neural networks are highly confident on o.o.d. data". I'd argue this is a bit different with large pretrained models.
- Since your results used pre-trained model, [6, 7] are relevant citations because they justify pre-training in a Bayesian setting.
[1] Nado, Zachary, et al. "Uncertainty Baselines: Benchmarks for uncertainty & robustness in deep learning." arXiv preprint arXiv:2106.04015 (2021).
[2] Liu, Jeremiah, et al. "Simple and principled uncertainty estimation with deterministic deep learning via distance awareness." Advances in Neural Information Processing Systems 33 (2020): 7498-7512.
[3] Van Amersfoort, Joost, et al. "Uncertainty estimation using a single deep deterministic neural network." International conference on machine learning. PMLR, 2020.
[4] Daxberger, Erik, et al. "Laplace redux-effortless bayesian deep learning." Advances in Neural Information Processing Systems 34 (2021): 20089-20103.
[5] Sharma, Mrinank, et al. "Do Bayesian Neural Networks Need To Be Fully Stochastic?." International Conference on Artificial Intelligence and Statistics. PMLR, 2023.
[6] Shwartz-Ziv, Ravid, et al. "Pre-train your loss: Easy bayesian transfer learning with informative priors." Advances in Neural Information Processing Systems 35 (2022): 27706-27715.
[7] Sharma, Mrinank, et al. "Incorporating Unlabelled Data into Bayesian Neural Networks." arXiv preprint arXiv:2304.01762 (2023).
Technical Quality: 2 fair
Clarity: 1 poor
Questions for Authors: See also the weaknesses for the list of questions. Some additional questions are:
1. What are the differences between WILDS and, e.g., CIFAR-10, other than the "realistic" dataset shift? Would you be more satisfied with BDL people evaluating on ImageNet-C?
2. How are hyperparameters selected?
3. How do you tune the prior precision for the Laplace approximation? In my experience, it can be quite difficult to tune well. I tried to look in the appendix, but it said that I didn't have permission to view the file.
4. Line 328 says Bayes-by-Backprop has a "last-layer nature". Why? People perform BBB on the full network.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 1 poor
Contribution: 1 poor
Limitations: Seems fine to me; see weaknesses.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your effort in reviewing our paper. We are happy that you found our results interesting and that you think there is a need for benchmarking BDL techniques. We will address your concerns regarding our evaluation below and refer to our global response, which summarizes our takeaway messages, presents preliminary results for SNGP, discusses our algorithm selection in detail, and provides the list of citations.
Overall, we believe our results are highly relevant to the BDL community since we compare many current SOTA algorithms (MultiX, natural gradient VI) on a diverse range of realistic tasks (finetuning, transformer-based models, large-scale regression) and find surprising results (cf. our takeaway messages). Our results allow a direct comparison with the non-Bayesian OOD generalization baselines of [2]. Reviewers Sjhm, 1hhr, hcj9, and CBPk agree with us on the potential impact of the results.
**Batch normalization.** We discuss this effect in more detail in Appendix E and refer to [1] which demonstrates the problem with Batch Normalization on more datasets.
**Size of the contribution.** We agree that clear and actionable takeaway messages are very useful and will add them, as stated in the global response, to our revised conclusion. As we wanted to focus on the evaluation of a wide range of methods on many diverse datasets, we decided against investigating a single phenomenon in depth. This approach follows similar large-scale evaluations published at NeurIPS both within BDL [5,8] and other domains [14,15] (see the references below). Further, NeurIPS 2023 explicitly invites "Evaluation (e.g., methodology, meta studies, replicability and validity)" submissions.
**Performance of BBB.** Please note that the performance of BBB on the image classification tasks is in line with the results of [2] on similar tasks. Further, the regularization of MAP on the text classification tasks is strong (weight decay of ${10^{-2}}$, as mandated by [3]). Reducing the weight decay makes MAP more accurate on CivilComments, but still less accurate than BBB. As shown in Figure 3 in the uploaded PDF, reducing the regularization of BBB by introducing a tempering factor does not significantly change BBB's accuracy. These results indicate that BBB's performance is at most partially due to a difference in regularization, and otherwise represents an actual benefit of performing mean-field VI. We will add an extended discussion of this topic to the paper revision.
**Distinction to Nado et al.** We thank you for the suggestion to cite [2]. We added a reference and short summary in the related work section. We believe that our paper offers a number of important new results compared to their work:
- A systematic evaluation of ensembles of probabilistic single-mode posterior approximations (MultiX). Reviewers hcj9 and 1hhR agree with us that this evaluation is highly relevant. Further, we include approaches not considered in [2], e.g., the Laplace approximation and iVON (natural gradient VI).
- A systematic evaluation of BDL algorithms for the finetuning of pre-trained models.
- The inclusion of a large-scale regression task. To the best of our knowledge, such an evaluation is still completely missing from the BDL literature, including [2], who only evaluate on a number of small UCI datasets.
- Insights into the posterior approximation quality by comparing to the HMC samples from [6].
- Strict adherence to the evaluation protocol of [3], which allows for a fair comparison with the non-Bayesian baselines on the WILDS leaderboard.
**Missing baselines.** Following your suggestion, we will add a comparison with SNGP in the revision. We include preliminary results in the uploaded PDF file (Figure 1a). Furthermore, we discuss our algorithm selection in the global response and will add this discussion to the appendix.
**Realism of evaluations.** We want to clarify that we do not think that evaluations on CIFAR-10-C are unrealistic, but one of our main research questions is whether the conclusions on these artificial distribution shifts transfer to real-world distribution shifts. Regardless of our concrete results, we see this as an important point by itself, as it has direct implications for the applicability of BDL. We do find novel and surprising results (c.f. takeaway messages). Please also note that while [9] evaluates on WILDS, they finetune models from [2], originally trained with algorithms that are specially designed for domain generalization. This prevents a fair comparison of the algorithms' performance in isolation. Further, they consider a significantly smaller number of algorithms.
**Comparison with HMC.** We changed the formulations in the paper to be more cautious about the implications in parameter space. We now refer to [7] to acknowledge the problems with the HMC samples, but argue that they currently still provide the best available way to assess the posterior approximation quality on a complex task.
**Competitiveness with special OOD generalization algorithms.** We added an explicit reference to the WILDS leaderboard [3] which provides an extensive selection of OOD generalization algorithms.
**Overconfidence of large pretrained models.** We agree that such issues are less severe on large models, yet, our results for MAP on FMoW (Figure 3b) and CivilComments (Figure 4) show that these models can still be highly overconfident on OOD data.
**Hyperparameters.** We selected hyperparameters based on a combination of grid search and default values, and tuned the prior precision of Laplace based on the marginal likelihood of the training data.
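To make the evidence-based tuning concrete: maximizing the training-data marginal likelihood over a grid of prior precisions can be sketched in the analytically tractable Bayesian linear regression case (the grid, the fixed noise precision, and the function names below are our illustrative choices, not the paper's actual last-layer Laplace implementation):

```python
import numpy as np

def log_evidence(X, y, alpha, beta=25.0):
    """Log marginal likelihood of Bayesian linear regression with
    prior precision alpha and fixed noise precision beta
    (Bishop, PRML, Eq. 3.86)."""
    n, d = X.shape
    A = alpha * np.eye(d) + beta * X.T @ X        # posterior precision
    m = beta * np.linalg.solve(A, X.T @ y)        # posterior mean
    err = 0.5 * beta * np.sum((y - X @ m) ** 2) + 0.5 * alpha * m @ m
    _, logdet = np.linalg.slogdet(A)
    return (0.5 * d * np.log(alpha) + 0.5 * n * np.log(beta)
            - err - 0.5 * logdet - 0.5 * n * np.log(2 * np.pi))

def tune_prior_precision(X, y, grid=np.logspace(-3, 3, 25)):
    """Pick the prior precision that maximizes the training-data evidence."""
    return grid[int(np.argmax([log_evidence(X, y, a) for a in grid]))]
```

For a last-layer Laplace approximation on a network, the same idea applies with a Gauss-Newton or Fisher approximation of the Hessian in place of the exact term beta * X.T @ X.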
Finally, we appreciate your suggestions regarding our writing style and additional citations and will include this feedback in our revision.
[14] Veilleux, Olivier, et al. "Realistic evaluation of transductive few-shot learning" NeurIPS 2021
[15] Setlur, Amrith, et al. "Two sides of meta-learning evaluation: In vs. out of distribution" NeurIPS 2021
---
Rebuttal Comment 1.1:
Comment: Thanks for the comments. They make sense. Overall, I still have doubts about the contribution of the paper and how interesting the findings are. I raise my score to borderline reject.
---
Reply to Comment 1.1.1:
Comment: Thank you very much for your response and for raising the score! We are interested in details about the concerns that were not addressed by our rebuttal. We would be grateful if you could elaborate a bit more so that we have the chance to further improve our paper. In particular, if you think any specific additional experiments would strengthen the paper, please let us know. | Summary: This paper presents an empirical comparison between various Bayesian methods/approaches on out-of-distribution (OOD) data. The authors focus on non-MCMC based methods (such as Bayes By Backprop, SWAG, and Laplace approximation). Methods are compared on challenging OOD benchmarks from the WILDS collection and using several network architectures. All compared methods are evaluated in two flavours, first as a single-mode posterior and second in an ensemble of models. Results are somewhat inconclusive, different methods tend to work better in terms of calibration and generalization on different datasets.
Strengths: * I think that the paper makes a nice contribution to the community. It may be hard to pick a model that performs well on OOD data from the Bayesian model zoo, and this paper makes another step towards a better understanding of this question.
* These types of projects are hard to manage. There are many choices to be made in terms of methods, datasets, evaluation metrics, etc. I think that the authors made good choices in all of these aspects. For instance, they picked methods that reflect typical Bayesian methods people use, and they focused on 2-3 evaluation metrics that capture both uncertainty quantification and generalization capabilities adequately.
* For the most part, the paper is written well. The authors justify many of their choices.
* Evaluating the ensemble version of single-mode-posteriors is a good idea and was mandatory in my opinion.
* The results seem reproducible: full experimental details were given and the code was provided.
Weaknesses: * I find it hard to understand the key takeaways. I think the authors should present the main conclusion from each experiment and the key takeaways from all experiments (e.g., inside a text box with some background color, or in bold).
* As the results are inconclusive, e.g., Rank-1 VI is best calibrated on PovertyMAP, but less so on IWILDCAM, FMOW, and RXRX1, I believe it would have been beneficial to suggest a possible explanation (or perform an empirical analysis) for the reasons that make a method work better on some datasets and worse on others.
* An important evaluation metric that is missing in my opinion is in terms of computational complexity (memory and time of each method). It may be important information when comparing the methods as well.
* The authors evaluate last-layer Bayesian methods, which is important and great. An important method that I find missing is deep kernel learning [1, 2]. It is also a popular last-layer Bayesian method and, from my experience, it tends to work better than most of the compared methods.
* In terms of exposition:
* I think that the authors should explain at the beginning of Section 5 or 5.1 that the analyses of the results are given in Sections 5.3 and 5.4. It is not clear until reaching there.
* In the figures, the font of the tick labels, axis labels, and method names should be larger.
[1] Wilson, A. G., Hu, Z., Salakhutdinov, R., & Xing, E. P. (2016, May). Deep kernel learning. In Artificial intelligence and statistics (pp. 370-378). PMLR.
[2] Calandra, R., Peters, J., Rasmussen, C. E., & Deisenroth, M. P. (2016, July). Manifold Gaussian processes for regression. In 2016 International joint conference on neural networks (IJCNN) (pp. 3338-3345). IEEE.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: * I may be wrong, but from the main text it seems that you evaluated BBB on the last layer only; why not use it on the full network?
* The reference format is a bit odd. For instance, not all authors names are written. Please check that you adhere to NeurIPS guidelines.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations: The authors addressed limitations of their work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you very much for your overall positive remarks. We are happy that you think "that the paper makes a nice contribution to the community" and that we made "good choices" regarding methods, datasets, and metrics. We address your remaining questions below. Please also see the global response for a list of takeaway messages, results for SNGP, and a list of all citations.
**Key takeaways.** We added an itemized list of takeaway messages as part of the conclusion. Please see our global response for the takeaway message.
**Explanation of performance differences.** Please note that our results are largely conclusive within groups of similar tasks, e.g., the image classification tasks iWildCam, FMoW, and RxRx1, and the text classification tasks CivilComments and Amazon. For Rank-1 VI in particular, we strongly suspect that the good performance on CIFAR (classification) and PovertyMap (regression) and the worse performance on the other datasets indicates that the rank-1 factors are only sufficiently multi-modal when performing inference on the full network, but not when performing inference on only the last layers. Following your suggestions, we will add a clearer discussion of this point to the revised manuscript.
**Computational complexity.** We agree that computational complexity is an important metric. Please note that we report the runtime of each method on each dataset in the appendix (Section F, Tables 1 and 2). We will add a similar comparison for the memory consumption as given below:
| Method | Memory consumption relative to MAP |
|------------|--------------------------------------------------------------------|
| MCD | 1 |
| SWAG | 1 |
| LL Laplace | $\sim 1$ |
| Ensembles | 1 (since the models can be trained independently) |
| BBB | 2 |
| Rank-1 VI | $\sim 1$ + #components $\cdot \sqrt{\text{parameter count}}$ |
| SVGD | #particles |
| iVON | $\sim 2$, but no additional memory overhead due to an optimizer |
The memory overhead is of course lower when using the algorithms only on the last layer(s).
**Deep Kernel Learning.** Thank you for the suggestion. We added SNGP, a scalable variant of DKL, as an additional baseline. Please see our global response for further details and a general remark regarding our algorithm selection.
**Overview of Section 5.** Thank you very much for the suggestion. We added a short overview over the section as well as references to the subsections at the beginning of Section 5.
**Font size in figures.** Again, thank you very much for the suggestion. We will take it into account for the next revision of the paper.
**Last-layer BBB.** On CIFAR-10-(C) and PovertyMap, we trained BBB on the full network. On the other tasks from WILDS, we trained BBB only on the classification heads of the respective models. This was mainly done to limit BBB's runtime, which is of major concern given its computational overhead.
**Reference format.** Thank you very much for the notice. We will check all references in the paper and ensure that they comply with the NeurIPS guidelines in our revision.
---
Rebuttal Comment 1.1:
Title: Response to Authors
Comment: I thank the authors for the response. I believe the authors addressed some of the concerns well (including mine) while less so to others, such as the need for a proper analysis as to why and when a certain method is preferred (which was also raised by Reviewer eT5S). Overall, I believe that this paper makes a nice contribution to the community and therefore I retain my original score.
---
Reply to Comment 1.1.1:
Comment: Thank you very much for your response. We are happy that you find our paper to make a "nice contribution to the community". | Summary: This paper presents a systematic, comprehensive evaluation of a wide range of scalable Bayesian deep learning (BDL) algorithms in distribution-shifted real-world data scenarios. A signed version of the calibration error is presented, which allows for the identification of overconfidence and underconfidence rather than only absolute calibration. The study bolsters the ongoing effort in the BDL community to establish the practicality of BDL algorithms in safety-critical settings.
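A minimal sketch of such a signed calibration error (with equal-width confidence binning as our own assumption; the paper's exact formula may differ):

```python
import numpy as np

def signed_ece(conf, correct, n_bins=10):
    """Like expected calibration error, but the per-bin
    (confidence - accuracy) gap keeps its sign instead of being
    taken in absolute value: positive values indicate average
    overconfidence, negative values underconfidence."""
    conf = np.asarray(conf, dtype=float)
    correct = np.asarray(correct, dtype=float)
    bins = np.clip((conf * n_bins).astype(int), 0, n_bins - 1)
    sce = 0.0
    for b in range(n_bins):
        mask = bins == b
        if mask.any():
            sce += mask.mean() * (conf[mask].mean() - correct[mask].mean())
    return sce

signed_ece([0.9] * 10, [1, 1, 1, 1, 1, 0, 0, 0, 0, 0])  # ~0.4: overconfident
```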
Strengths: In my opinion, this line of work has huge potential but is in dire need of a "reality check" in order to make tangible, impactful progress in real-world safety-critical settings. I believe this paper makes real progress in that direction. Indeed, the primary motivating statement behind uncertainty estimation is one which appeals to its utility in safety-critical applications, i.e. how and when can we trust our predictions. However, perhaps ironically, much emphasis has been placed on reiterating the former aspect by proposing more sophisticated uncertainty estimation algorithms, and not enough emphasis has been placed on establishing their utility and reproducibility in the very safety-critical settings they were designed for.
The paper is well-written and I found the empirics to be thorough and convincing. I believe this work will help guide the BDL community toward more practicable outcomes so I recommend acceptance.
Weaknesses: One weakness of this type of empirical review is that - although appropriately broad and comprehensive for a conference submission - the empirical study can always be extended to other tasks which may or may not change the conclusions. Another weakness is the lack of novelty from a methodological perspective; although I do appreciate the proposed usage of a signed calibration error to identify under- or overconfidence without having to look at reliability diagrams.
I would suggest including an itemized list of the main takeaways from the empirical study in the conclusion/discussion section of this paper. I think this would help organize the findings of the study and would provide easy-to-follow actionable advice for BDL practitioners going forward.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors:
- Can you summarise the implications for practitioners who use large-scale models and need to make decisions under distribution shifts?
- Do you believe there is an inherent trade-off between accuracy, efficiency, and calibration?
- In your opinion, what are the key research directions to further improve the generalization and calibration capabilities of BDL in real-world distribution-shifted data scenarios?
- How do frequentist methods like conformal prediction factor into this study?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 3 good
Limitations: See the weaknesses section above.
- Figures 3 and 4 are a bit too small; consider using a horizontal legend, for example.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank you very much for your helpful and positive feedback. We are thrilled that you find that our paper "makes real progress" towards practically applicable BDL and that it is "thorough and convincing". We answer your remaining questions below. Please also see the global response for a list of takeaway messages, results for SNGP, and a list of all citations.
**Itemized list of takeaways and implications for practitioners.** Thank you very much for the suggestion. We added a list of takeaway messages to the conclusion and provide it here on OpenReview as part of the global response.
**Trade-off between accuracy, efficiency, and calibration.** Given our experiments, we do see a "pick only two of the three options" situation in the current state of the art. Given our current understanding of BDL, accurate posterior approximations should achieve accuracy and calibration gains, but are currently very expensive to obtain. However, we think it is conceivable that future algorithms may allow cheap but accurate Bayesian inference.
**Key research directions.** We think it has become clear, not only from our research, that multi-modal posterior approximations are central to BDL. While MultiX works well, it is computationally expensive and therefore unlikely to be of practical use in a world of models with ever-growing parameter counts. Therefore, we think cheap multi-modal posterior approximations are one of the most important future research directions. In a similar spirit, we think post-hoc BDL algorithms such as the Laplace approximation have significant potential to be used in practice to provide uncertainty estimates on top of already trained models. More broadly, finetuning with Bayesian principles as evaluated by us is promising due to its relatively low computational cost, but requires further research, perhaps even in the form of BDL algorithms specifically designed for finetuning. We will include some discussion along these lines in the conclusion of the paper revision.
**Frequentist methods.** Frequentist methods are certainly an important and, in many cases, effective approach to uncertainty quantification. However, we see them as orthogonal to the Bayesian methods that we evaluate, which build on the idea of inferring a posterior over the neural network's parameters. For practical applications of uncertainty quantification, all types of methods should of course be considered, including frequentist/deterministic uncertainty quantification methods. Please also see our global response for an extended discussion of our algorithm selection.
Finally, we appreciate your suggestions regarding Figures 3 and 4 and will include this feedback in our revision.
---
Rebuttal Comment 1.1:
Title: Response to the Rebuttal
Comment: I thank the authors for the detailed response, I am happy with the answers and retain my original score. I would only like to re-emphasize the need to include a list of easily digestible takeaway findings in the discussion/conclusion section of the paper.
---
Reply to Comment 1.1.1:
Comment: Thank you again very much for your helpful and positive feedback! We will make sure to add the takeaway messages to the conclusion of the revised manuscript. | Summary: The paper conducts a large-scale benchmark of Bayesian deep learning (BDL) methods for distribution-shifted data. The focus is on evaluating the quality of the posterior approximation, generalization ability, and calibration using signed versions of calibration metrics to distinguish between under- and overconfidence. The BDL algorithms are evaluated on convolutional and transformer-based architectures, mostly on regression and classification tasks from the WILDS collection. For most tasks, the experiments show improved generalization and calibration when extending single-mode approximations to multiple modes through ensembling. The authors identify a limitation of ensemble-based methods when finetuning transformer-based language models. In this task, accuracy and calibration are not significantly improved over single-mode approximations.
Strengths: Although there are similar experimental surveys of BDL algorithms, the paper benchmarks a wider range of state-of-the-art BDL algorithms applied to modern neural network architectures. Considering the success of ensembling in applications and recent competitions, the choice of focusing on the evaluation of ensembling single-mode BDL algorithms is relevant. The lack of improvement in the generalization and calibration of such methods in transformer-based finetuning tasks is insightful. The introduced signed calibration metrics are useful for making the distinction between over- and underconfident models. The authors have included code with their submission, which is useful for reviewing the details of the experiments and algorithmic implementations. The experiments are described with sufficient clarity and detail.
Weaknesses: 1) The paper focuses largely on finetuning tasks, with the exceptions of corrupted CIFAR-10 and the large-scale regression task. This limitation should be described in the "Limitations" section or addressed by including further experiments with models that are trained from scratch. Some of the algorithms have specific weaknesses or strengths that only become evident in such settings. For instance, ensemble-based methods may become computationally restrictive. Methods like iVON, SVGD, and BBB that modify the training objective or introduce noise to regularize the training may lead to base models with improved generalization or calibration. When focusing on finetuning of pre-trained models, these effects are neglected to some extent.
2) The experiment on the RxRx1-WILDS dataset in Fig. 3 lacks results for several algorithms. The figure caption and main text do not justify this choice. The exception is the Laplace approximation, where underperformance is noted as a reason.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1) In the conclusion, the authors state that “[...] we demonstrated that BDL is in many cases competitive with algorithms that are specifically designed for o.o.d. generalization.” To justify that conclusion, additional comparisons to such algorithms, either from existing works or through experimental evaluation, should be added. Except for a hint to the WILDS leaderboard for the large-scale regression experiments, comparisons to specialized methods that are designed for o.o.d. generalization are missing from the main text.
2) One main result of the paper is that ensembling-based BDL methods do not systematically improve upon the single-mode counterparts in finetuning of transformer-based language models. The authors hypothesize that this finding may be due to the nature of the finetuning task. There, the initialization from a pre-trained model may lead to a lack of diversity among the ensemble members. Considering the results of the “Finetuning for CNNs” section, where ensemble methods yield clear improvements, the hypothesis is not fully supported. To clarify the reason for the lack of performance of ensemble-based methods, results on a second architecture in addition to DistilBERT would be insightful. Experimental results to identify the mechanism among task, architecture (BERT-specific?), training procedure, or others, would further strengthen the submission.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank you a lot for your helpful and positive feedback. We are happy that you find our choice of algorithms to be "relevant", and the failure of ensembles on transformer-based finetuning tasks to be "insightful". We answer your remaining questions below. Please also see the global response for a list of takeaway messages, results for SNGP, and a list of all citations.
**Finetuning tasks.** We added the following paragraph to the limitations:
*Except for the results on CIFAR-10-(C) and PovertyMap-wilds, all results were obtained by finetuning pre-trained models and are therefore only valid in this setting.*
Note that even when finetuning, training an ensemble requires training multiple models independently with the associated computational cost, since we finetune all layers of the models. Of course, training is still significantly less computationally expensive than when training all members from scratch.
**Missing results for RxRx1 in Figure 3.** We excluded the VI algorithms for the same reason as Laplace: They are significantly less accurate than the other methods and including them in the figure would make the figure hardly readable.
The results can be found in the appendix in Figure 14. We added a reference to this figure in the caption of Figure 3 and made the reason for exclusion clearer.
**Comparison with additional baselines.** Thank you very much for the suggestion. Since the WILDS leaderboard [3] contains extensive baselines for all WILDS tasks, including the classification tasks, we will include a small selection of results from WILDS and a more prominent reference to the leaderboard to better support our conclusion.
**Additional results on transformers other than BERT.** Thank you very much for the suggestion. We agree with you that results on an additional architecture would be helpful; however, we find this to be infeasible given the significant computational cost of training ensembles. We decided on the two WILDS tasks due to the availability of baselines for these tasks on the WILDS leaderboard [3]. However, we do have additional results available for models trained with only very small regularization, as we hypothesized that this would allow the ensemble members to better diversify. Please see Figure 3 in the uploaded PDF. On CivilComments, we found that the accuracy of MAP increases when only using a weight decay factor of $10^{-4}$ (though not to the accuracy level of BBB), but the Deep Ensemble still does not perform better than a single model. | Rebuttal 1:
Rebuttal: We would like to thank all reviewers for their time and efforts in providing detailed and insightful feedback, which we will incorporate in our revision.
**We are pleased that the reviewers are convinced that our paper "provides a useful benchmark of BDL on more realistic datasets" (Reviewer CBPk) and is "appropriately broad and comprehensive for a conference submission" (Reviewer Sjhm). The reviewers think our results are "insightful" (Reviewer 1hhR) and "convincing" (Reviewer Sjhm), and that the paper "makes real progress" (Reviewer Sjhm) towards real-world applicability of BDL.**
Reviewer eTS5 was hesitant to recommend acceptance due to similarities to other BDL evaluation papers and a focus on the evaluation itself, rather than a deep investigation of singular results. Therefore, we would like to emphasize that our work contains a number of novel results which we judge to be highly relevant for practitioners as summarized below.
## Key Takeaways
As requested by reviewers Sjhm, hcj9, and eTS5, we would like to summarize our key takeaways, which we expect to have direct implications for practitioners and method developers and have been unknown or not thoroughly shown until now. Since we evaluated on diverse, realistic distribution-shifted data, the results are likely directly applicable to many real-world applications of BDL.
- Finetuning only the last layers of pre-trained models with BDL algorithms gives a significant boost of generalization accuracy and calibration on realistic distribution-shifted data, while incurring a comparatively small runtime overhead. These models are in many cases competitive to or even outperform methods that are specially designed for OOD generalization such as IRM [12] and Fish [13].
- For CNNs, ensembles are more accurate and better calibrated on OOD data than single-mode posterior approximations by a wide margin, even when initializing all ensemble members from the same pre-trained checkpoint with only the last layers differently initialized, i.e. when not using the standard protocol of randomly initializing all ensemble members. Ensembling probabilistic single-mode posterior approximations such as SWAG or MCD yields only a small additional increase in accuracy and calibration.
- When finetuning large transformers, ensembles yield no benefit. Compared to all other evaluated BDL algorithms, classical mean-field variational inference achieves significant accuracy gains under distribution shift. We want to emphasize the importance of this point: Our results show that ensembles, which are typically considered to be the SOTA in BDL, do not work well for finetuning large transformers, a setting that is highly relevant given the recent success of large language models.
In the paper revision, we added these points prominently as part of the conclusion of the paper.
## Algorithm Selection
**We follow the requests of reviewers eTS5, CBPk, and hcj9 and add SNGP [10], a scalable variant of DKL, as an additional baseline for the WILDS tasks. We provide preliminary results in the additionally uploaded PDF file (Figure 1).** Please note that these results are not directly comparable to our other results and the WILDS baselines due to necessary modifications of the architecture, in particular the addition of a GP. Due to the limited timeframe, we could so far only evaluate with the default hyperparameters from [19]. We are working on completing the results for all WILDS datasets.
Furthermore, as multiple reviewers requested additional algorithms, we want to clarify our selection of algorithms. We selected the algorithms based on the following criteria, that ensure scalability of the algorithms and allow a fair comparison with the non-Bayesian baselines of WILDS:
- Algorithms must scale to large parameter counts and datasets. This excludes MCMC-based methods.
- It must not be required to modify the underlying neural network. This ensures that the comparison with the WILDS baselines is fair, and excludes GP-based methods such as DKL, SNGP, and DUE.
- Algorithms must approximate the parameter posterior of the underlying neural network. This allows us to compare the algorithms with the HMC samples from [6], and excludes deterministic UQ methods such as DUQ.
- Algorithms must be applicable to both classification and regression problems. This excludes e.g. fBNN.
We believe that these criteria make for a fair comparison that is relevant to practitioners. In particular, reviewers hcj9, Sjhm, and 1hhR agree with us on the relevance of the selected algorithms.
[1] Schneider, Steffen et al. "Improving robustness against common corruptions by covariate shift adaptation" NeurIPS 2020
[2] Nado, Zachary, et al. "Uncertainty Baselines: Benchmarks for uncertainty \& robustness in deep learning." arXiv 2021
[3] Koh, Pang Wei, et al. "WILDS: A Benchmark of in-the-Wild Distribution Shifts" ICML 2021
[4] Wilson, Andrew G., et al. "Bayesian Deep Learning and a Probabilistic Perspective of Generalization" NeurIPS 2020
[5] Band, Neil, et al. "Benchmarking Bayesian Deep Learning on Diabetic Retinopathy Detection Tasks" NeurIPS 2021 Datasets and Benchmarks Track
[6] Izmailov, Pavel, et al. "What Are Bayesian Neural Network Posteriors Really Like?", ICML 2021
[7] Sharma, Mrinank, et al. "Do Bayesian Neural Networks Need To Be Fully Stochastic?" AISTATS 2023
[8] Ovadia, Yaniv, et al. "Can you trust your model's uncertainty? Evaluating predictive uncertainty under dataset shift" NeurIPS 2019
[9] Daxberger, Erik, et al. "Laplace redux - effortless bayesian deep learning" NeurIPS 2021
[10] Liu, Jeremiah, et al. "Simple and principled uncertainty estimation with deterministic deep learning via distance awareness" NeurIPS 2020
[11] D'Angelo et al. "On out-of-distribution detection with Bayesian neural networks" arXiv 2022
[12] Arjovsky, Martin, et al. "Invariant risk minimization" arXiv 2019
[13] Yuge, Shi et al. "Gradient Matching for Domain Generalization" ICLR 2022
Pdf: /pdf/a2c91af4c476ba88706f15540c9930ae8c2e69aa.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: This paper compares a variety of Bayesian Deep Learning methods on the WILDS dataset collection comprised of regression and classification tasks with the aim of evaluation generalization performance under distribution shifts. In addition to previously proposed single-mode approximation methods, the authors extend the methodology to sampling ensembles of these kinds of methods and show an improvement on various tasks. Additionally, the authors adapt existing calibration metrics to a signed version that is able to reflect under- or over confidence more directly.
General comment:
- As you state, your paper is in line with the work of (Band et al. 2021) which was submitted to and accepted at the Neurips Benchmark track. The [Neurips website](https://nips.cc/Conferences/2023/CallForDatasetsBenchmarks) states that it is suited among other criteria for "Systematic analyses of existing systems on novel datasets yielding important new insight." which to me perfectly describes this work and in conclusion seems to be a better fit than the main track.
Strengths: - extensive experiments across image classification and regression tasks as well as text based classification
- released code can be run without errors and provides adequate comments about running the code
- the methodology of the paper builds on established datasets and methods and provides a useful benchmark of BDL on more realistic datasets
Weaknesses: - I believe the paper could benefit from a small discussion about why Bayesian Deep Learning in general should be well suited to OOD tasks since Bayesian methods through Bayes rule assume that the test data follows the same distribution as the training data. A relevant discussion seems to be this paper for example (https://arxiv.org/abs/2110.06020)
- I think the training/evaluation setup over the different folds and how those folds are created should be explained more explicitly
- The wide variety of methods you compare is a great strength of the paper, however, I was wondering why you did not include Deep Kernel Learning (DKL) ([Wilson et al. 2015](https://arxiv.org/abs/1511.02222)) or an "improvement" of it like DUE ([Amersfoort et al. 2021](https://arxiv.org/abs/2102.11409)) since their experiments show good generalization performance on a variety of tasks and are also theoretically grounded in Gaussian Processes which many regard as a "gold standard" in Bayesian Machine Learning and UQ
- you state that you provide the first systematic evaluation of BDL for finetuning large pre-trained models, but I would argue that the common practice of using imagenet weights for example is also finetuning? Or are you saying your experiments are the first systematic evaluation with respect to binary text classification?
- the evaluation of "generalizability" seems to focus on accuracy metrics such as the Pearson coefficient or the F1 score for classification; however, wouldn't the NLL or other proper scoring rules [Gneiting and Raftery 2007](https://www.tandfonline.com/doi/abs/10.1198/016214506000001437) be more suitable metrics, since accuracy metrics ignore the predictive uncertainty? I am aware that these are provided for the regression task in the appendix, but should they not be part of the main analysis?
Extra comments:
- On some tables in the Appendix I am not clear about the use of bold face, for example in Table 8 the entire MSE column is bold, or I misread the table?
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: - I am slightly confused by how the methodology and code defines a predictive distribution or predictive uncertainty. It appears that you use a Gaussian Mixture Model approach for the Deep Ensembles, but in the appendix you state that "We optimize the log likelihood of the training data and use a fixed standard deviation of 0.1, as this is the value MAP converges to when jointly optimizing the standard deviation and the model's parameters". Is this supposed to be an estimate of the aleatoric uncertainty that is added to the epistemic uncertainty you obtain when computing a variance over the sampled point predictions?
- Why can a pretrained model like resnet18 not be used for the povertyMAP task, but instead you train a model from scratch? At least I was not able to find loading an image-net pretrained weights in the code and there is lots of papers in the earth observation domain that show that image-net outperforms training from scratch even on satellite based optical imagery (as a latest example [Corley et al 2023](https://arxiv.org/pdf/2305.13456.pdf))
- from my understanding a strength of the Laplace approximation is that it yields an uncertainty estimate on top of the MAP estimate and it should theoretically yield the same mean prediction as the MAP model which should be reflected in the same accuracy metric, however, this is not the case in almost all experiments, like Figure 2 and 3. I understand that you are using a sampling approach but perhaps you are not using enough samples? A striking example seems Figure 3 where you state that Laplace underperforms on FMoW and RxRx1 while a Deep Ensemble that is also based on MAP estimates performs well.
- As you state in your conclusion, BBB performs quite well on a variety of tasks, but it appears to always be beaten by the much simpler MC-Dropout approach in terms of the TV metric when compared to the gold-standard HMC uncertainty estimation (Figure 5). I was wondering whether you could comment on this discrepancy and whether you have a possible explanation for it?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: The limitations are accurately addressed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you very much for your constructive and detailed feedback. We hope the following points help to resolve all your questions regarding our work. If you have any further questions, please let us know so that we can clarify things further. Please also see the global response for a list of takeaway messages, results for SNGP, and a list of all citations.
**NeurIPS Benchmark track.** We agree with you that the Benchmark track would have also been a good fit for our paper. Before submitting the paper, we reviewed both the main track and the Benchmark track and decided for the main track, since the NeurIPS Call for Papers for the main track explicitly lists "Evaluation (e.g., methodology, meta studies, replicability and validity)" as a submission topic and the Benchmark track's Call for Papers states "It is also still possible to submit datasets and benchmarks to the main conference".
**Discussion about the usefulness of BDL.** Thank you very much for the suggestion. We added the proposed discussion to the introduction, mentioning both the theoretical results of [12], the intuition about the Bayesian Model Average given by [4], and the good OOD calibration results that BDL can achieve in practice as reported by practical evaluations [1, 4, 6], including our own.
**Training/Evaluation Setup.** We added an explicit description of the folds and training/test setup to Appendix G.3. For all WILDS tasks, we used the same splits as [3] to ensure that our results are directly comparable with their results. We used in-distribution validation splits and, where available, OOD validation splits for hyperparameter optimization, in line with the protocol of [3].
**Deep Kernel Learning.** Thank you for the suggestion! We added SNGP, a scalable variant of DKL, as an additional baseline. Please see our global response for further details.
**Initialization from ImageNet weights.** We agree with you that initializing from ImageNet weights can also be considered finetuning. However, to the best of our knowledge, our evaluation is the first to systematically evaluate finetuning across a wide range of BDL algorithms and tasks. For example, neither of the BDL evaluations of [8] and [5] considers finetuning, and [2] does so only in a very limited fashion. We clarify this point in the revised version of our manuscript.
**NLL Metric.** We decided to specifically separate predictive accuracy and uncertainty, as we find the NLL to be hard to interpret, given that it mixes both the accuracy and calibration into a single metric. Especially when considering practical applications of BDL, we believe that it is relevant to be able to assess accuracy and uncertainty independently, which is why we decided to provide these metrics in the main text and move the NLL to the appendix. In our revised paper, we more prominently reference the NLL and also include it for all tasks where it is not yet present in Appendix G. We also provide the extended tables for FMoW and CivilComments in the uploaded PDF file (Table 1).
**Bold face in tables.** The bold face is intended to highlight the best performing methods within two times standard error. However, there are a few tables in the appendix where we incorrectly highlighted all numbers. Thanks for catching this! We will fix it in the revised version.
**Predictive Distribution & Ensembles.** As you correctly state, we treat ensembles as mixture models where each mixture component receives equal weight. The standard deviation in the NLL of PovertyMap does indeed represent the aleatoric uncertainty. We clarified this in Appendix G.3.2.
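The equal-weight mixture predictive described above can be made concrete with a minimal numpy sketch. The fixed standard deviation of 0.1 is taken from the rebuttal; the member means and the target are made-up illustrative values:

```python
import numpy as np

# Hypothetical mean predictions from an ensemble of M = 3 members for one input.
means = np.array([2.1, 1.9, 2.3])
sigma = 0.1   # fixed aleatoric standard deviation, as described in the rebuttal
y = 2.0       # observed regression target

# Equal-weight Gaussian mixture: average the per-member likelihoods, then take -log.
densities = np.exp(-0.5 * ((y - means) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
nll = -np.log(densities.mean())
```

The variance of the mixture then combines the fixed aleatoric term with the spread of the member means, which is the epistemic component the question refers to.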
**Pretraining on PovertyMap.** We agree with you that a pretrained model would most likely have improved the model's accuracy. However, our goal was to stay consistent with [3] whenever possible, so that our results can be fairly compared to their non-Bayesian baselines. In particular, [3] do not use a pretrained model on PovertyMap (see Appendix E.8.1 of [3]).
**Performance of Laplace.** Following your remarks, we add an extended discussion of how we made sure that our implementation works correctly in our revised manuscript and also in the uploaded PDF file. To check for sampling issues, we evaluated Laplace with 1000 posterior samples on FMoW and 100 on RxRx1 and found an increase in accuracy, with Laplace still performing worse than MAP (see Figure 2 in the uploaded PDF file). We will rerun the Laplace evaluations for all tasks and include the results in the revision of the paper, with an explicit mention of the different number of posterior samples. Further, we will add a discussion of the sampling issues in the appendix. To further check our implementation, we evaluated the Laplace model at its parameter posterior mean on FMoW and recovered the accuracy of MAP.
**BBB vs. MCD.** The agreement with HMC reported in Figure 9 and Table 7 in our appendix may give an insight into this: BBB's and HMC's predictions more frequently differ from each other than MCD's and HMC's, which leads to a high TV in case of differing predictions. Further, our evaluation shows that the results on CIFAR, which is the only task where HMC samples are available to evaluate the TV, cannot be transferred to the text classification tasks where BBB performs particularly well compared to MCD.
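For reference, the TV metric discussed above — the disagreement between a method's predictive distribution and the HMC reference — is presumably the standard total variation distance between categorical distributions; a sketch under that assumption:

```python
import numpy as np

def tv(p, q):
    # Total variation distance between two categorical predictive distributions:
    # half the L1 distance; 0 for identical distributions, 1 for disjoint support.
    return 0.5 * np.abs(np.asarray(p, dtype=float) - np.asarray(q, dtype=float)).sum()
```

This makes the point in the reply concrete: when two models place their mass on different classes, the TV between their predictives is large even if both are confident.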
---
Rebuttal Comment 1.1:
Comment: Thank you for your elaborate response to the raised questions and the effort put forward around additional experiments and improvements to the manuscript and therefore raise my score.
---
Reply to Comment 1.1.1:
Comment: Thank you very much for your response and for raising the score. We are happy that we were able to address your concerns. | null | null | null | null | null | null |
RAPHAEL: Text-to-Image Generation via Large Mixture of Diffusion Paths | Accept (poster) | Summary: This paper presents a new large model RAPHAEL (short for distinct image regions align with different text phrases in attention learning) for text-to-image generation. Technically, RAPHAEL builds upon the LDM pipeline, with VAEs as image encoder-decoder, and then incorporates MoE layers for spatial and temporal (in terms of diffusion steps) refinement in the diffusion generation process to improve the text and image fidelity. For the experiments and evaluations, the model is trained with LAION-5B and some confidential internal data; comparisons w/ other large models show better performance using the zero-shot FID on COCO.
Strengths: - The paper is well written with a clear structure and easy to follow.
- The model achieves sota FID and qualitative results compared to other strong and powerful similar scale models like Stable Diffusion, DeepFloyd and DALLE2.
- The high-level idea to incorporate MoEs to refine the spatial and temporal details in DPMs for text2image synthesis is intuitive and reasonable, with effectiveness proved in ablation studies.
Weaknesses: - This is another large-scale model work that requires 1000 A100 GPUs with 2-month training on LAION-5B plus an internal dataset; while the reviewer acknowledges the popularity of the topic and its superiority in performance, it is another work that can hardly be reproduced by most researchers in the field, especially w/ the internal training data inaccessible to the community.
- In terms of the methodology design, while the idea to refine the generation process w/ MoEs is intuitive, the technical novelties are rather limited, and are largely limited to this specific text2img task, as there are several existing works w/ similar ideas [4,5].
- Some technical details remain rather coarse and unclear. See details in my questions.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: I have several questions regarding several technical aspects listed below:
- I am still confused about the working mechanism of time-MoE after reading the paper and appendix. The time-MoE is a Time Gate Network at each diffusion step, located between the cross-attention and the space-MoE. The output of time-MoE is fed into the space-MoE; what does this info depict for the space-MoE at different diffusion steps?
- How does this work in inference, if the info from time-MoE does convey critical information in terms of steps, then how can the info be used in inference especially with the skipping sampling steps?
- What is the inference time cost using the proposed RAPHAEL compared to other popular models?
- How does the internal data impact the final performance? Does the performance change evidently w/o the inaccessible internal dataset?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: The paper discusses the limitations and potential negative impact on the risk of generating images with misleading and false information.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear reviewer fnVX,
Thanks for your comments. We will address your concerns below.
**Q1: RAPHAEL uses internal datasets and many computing resources**
We argue that this is not a weakness for rating specific to the text-to-image diffusion model community. Numerous academic papers accepted by top conferences/journals have internal datasets. We provide a table on text-to-image diffusion models here:
| Model | Venue | Internal data | GPUs/TPUs |
|----------------|------------------------------|---------------|-----------|
| Ours | N/A | Yes | 1k A100s |
| Imagen | NeurIPS'22 Outstanding Paper | Yes | 512 TPUs |
| ERNIE-ViLG 2.0 | CVPR'23 | Yes | 320 A100s |
**Notably, Imagen even received the outstanding paper award at NeurIPS last year.** We also understand ERNIE-ViLG 2.0 uses fewer A100s than us, but they have continued to update the model since last September, and we use their latest API to make comparisons. So it's difficult to compare GPU hours.
Moreover, we intend to address this issue by releasing an API to make the model more accessible to the public. We firmly believe that RAPHAEL will contribute significantly to the advancement of text-to-image generation in the research community.
Other than the diffusion models mentioned above, **other text-to-image models also use internal datasets, such as MUSE (ICML'23), GigaGAN (CVPR'23), Parti (TMLR'22)**.
**Q2:The novelty is limited.**
As highlighted by reviewer PszJ, **"The space-MoE technique, specifically in the context of text-to-image generation, appears to be new and well-motivated, leading to a meaningful performance boost." and "The main technical contribution is well-motivated and novel."** Reviewer NLKN also claims "There are **two novel ideas** explored in the work: using spatial MoEs (for attended features) and using edge supervision for attention weights. Both ideas make sense and should be easily extendable to other setups. " Both reviewers acknowledge that RAPHAEL presents novelty.
The space-moe and edge-supervised learning in RAPHAEL are our original contributions, which conduct region-level refinement during the denoising process and significantly improve image quality. We have also implemented a gating function to enhance the performance of time-moe, as noted by reviewer PszJ: "I particularly appreciate the gating mechanisms, where, in the case of time-MoE, they automatically assign different timesteps to various time experts. Previous work manually assigned experts to different diffusion time intervals (e.g., eDiff-I)." Our experts are assembled on-the-fly during inference, which is more flexible and parameter-efficient.
**Q3: Working mechanism of time-MoE**
The space-moe and time-moe are disjoint. We experimented with different placements of space-moe and time-moe and obtained similar performances.
The feature processed by time-moe is passed on to space-moe. This feature serves as a better representation of latent features than before and is utilized in space-moe. The gating function takes text tokens as input to determine which experts should process the features output by time-moe. Notably, the space-moe does not have any temporal dimension; it leverages the improved feature representation from time-moe.
**Q4: Time-moe in inference**
During the training process, the gating function is trained to automatically select the appropriate time expert. Once convergence is achieved, for example, at time step 1, expert 1 assists the UNet in fitting the score function. At time step 500, expert 2 is assigned to fit the score function. Therefore, during the sampling process, each time step is associated with a specific time expert to fit the score function. Although skipping of some time steps may occur during sampling when using DDIM or DPM solvers, the score functions for the selected time steps are accurately fitted with the assistance of these experts.
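The hard per-timestep gating described above can be sketched in a few lines of numpy. Everything here — the shapes, the sinusoidal timestep embedding, and the gate weights — is an illustrative assumption, not RAPHAEL's actual implementation; the point is only that each visited timestep deterministically maps to one expert, so skipped steps in DDIM/DPM sampling pose no problem:

```python
import numpy as np

rng = np.random.default_rng(0)
n_experts, d = 4, 8
experts = [rng.standard_normal((d, d)) for _ in range(n_experts)]  # toy expert weights
freqs = 1.0 / (10.0 ** np.arange(n_experts))                       # toy embedding frequencies
W_gate = rng.standard_normal((n_experts, n_experts))               # toy (trained) gate weights

def pick_expert(t):
    emb = np.sin(t * freqs)              # sinusoidal timestep embedding (illustrative)
    return int(np.argmax(W_gate @ emb))  # hard gate: exactly one expert per timestep

def time_moe(h, t):
    # The same t always routes to the same expert, so a sampler that skips
    # timesteps still evaluates a correctly fitted expert at each visited step.
    return h @ experts[pick_expert(t)]
```

A sparse argmax gate like this also keeps inference cost constant, since only one expert runs per step regardless of how many experts exist.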
**Q5: Inference cost**
We provide an analysis in Section 4.2, which shows that the inclusion of space-moe results in an additional 24% overhead. This is faster than models with cascaded designs, such as Imagen and DeepFloyd. Furthermore, time-moe and edge-supervised learning do not introduce any extra inference cost.
We can also compare RAPHAEL with other popular models. All the models provided here are highly optimized, so we will choose our optimized version to compare (a bit faster than the results in Fig 6c because of the update of our infrastructure). We also use this same environment to conduct a fair comparison between RAPHAEL and Stable Diffusion XL. **Please refer to the pdf file in the general response for the results.** So we think the inference cost is not the bottleneck for RAPHAEL.
**Q6: The impact of internal datasets**
We incorporated internal datasets to improve the aesthetics of the generated images, a practice common among prestigious text-to-image models, including Imagen, Stable Diffusion XL, DeepFloyd, MUSE, Parti, ERNIE-ViLG 2.0, DALL-E 2, eDiff-I, etc.
The internal data primarily impacts the aesthetics of the generated images. Due to the limited time of the rebuttal period, we resumed the checkpoint of RAPHAEL trained for two months and continued training it with LAION-5B for ~7 days instead of training from scratch. This is also reasonable because of the catastrophic forgetting properties of deep neural networks. Based on this model, we conducted a human evaluation using ViLG-300. The results indicate that most people (72.6%) prefer the model trained on internal datasets in terms of image quality. This preference is attributed to the poor image quality of LAION. However, we don't observe a significant difference in image-text alignment.
Additionally, we measure the FID of this model, obtaining 6.79, so the internal data does not have much influence on the COCO-30k FID.
---
Rebuttal Comment 1.1:
Comment: I would like to first thank the authors for the rebuttal and for carefully responding to my questions and concerns.
I have read the rebuttal and have always acknowledged that RAPHAEL is a technically solid work with valuable contributions to the community. However, the fact that similar works such as Imagen have previously received awards, or that other large generative models have also used internal datasets, is not the answer I expected to my concern regarding reproducibility, model accessibility, and performance (to be honest, it was never a question or concern specifically against RAPHAEL, but a general research question/concern). Personally, I would be more appreciative of works that actually improve our understanding of the model itself and can be beneficial and insightful for other practical and widely accessible applications.
Despite the above, I think my questions regarding the technical details have been answered, and thus raised my score after rebuttal.
---
Reply to Comment 1.1.1:
Title: Thanks for your feedback
Comment: We deeply appreciate your valuable feedback. And we will incorporate the technical details uncovered during the rebuttal into the final version if the paper is accepted | Summary: The paper trains a large-scale latent diffusion model for image synthesis. It trains on a mix of LAION-5B (post-processed + filtered by aesthetic score) and in-house data. It proposes two novel technical contributions: 1) using a "spatial" mixture-of-experts (MoE) where an expert is predicted from each text token and processes the attended features; and 2) supervising attention weights via edge maps. Interestingly (and differently from most of the prior works), it also uses multi-scale training. Qualitatively, the generated images look substantially better than the ones from the baselines. It achieves SOTA FID on COCO, which is the main metric/benchmark for text-to-image generators.
Strengths: - The method achieves the very best known results for large-scale text-to-image synthesis (among those models which could be rigorously benchmarked against).
- There are two novel ideas explored in the work: using spatial MoEs (for attended features) and using edge supervision for attention weights. Both ideas make sense and should be easily extendable to other setups. For spatial MoEs, there are also test-time visualizations provided which helps in understanding their influence.
- The comparison to other methods is careful and thorough: a lot of non-cherry-picked qualitative results provided; human studies are performed, quantitative metrics are reported.
Weaknesses: I have two big concerns: 1) spatial MoE and edges supervision are not properly ablated; and 2) the paper does not contain enough technical details to be reproduced. I will elaborate on them below:
1. Improper ablations. After spending ~5 hours reading the paper, it is still not clear to me where exactly the SotA FID score on COCO is coming from — architecture, data, or optimization, since they are all intermixed in the final model. Fig 6 denotes several ablation experiments, but it is not specified anywhere how they were trained. Was each run in Figure 6 also trained for 2 months on 1,000 GPUs? On the same dataset? Such a lack of details makes the ablations impossible to understand and analyze. For Figure 6c, what do the FID scores for the line "Computational complexity" denote? This is confusing.
2. Lack of details. The current manuscript is something between an academic paper and a technical report. Here are some (of many) missing details:
- How exactly do your U-net and VAE look? Are they equivalent to the LDM ones, but with larger channel sizes? Or are there other modifications (apart from the spatial/time MoEs)?
- How many images are in your in-house dataset, what are their resolutions, and how was it collected?
- How exactly was multi-scale training implemented? What is the resolution distribution in your final dataset? Do I get it right that different batches on different GPUs contain different numbers of images (since the resolutions differ)? Do you allocate the same number of GPUs per bucket? Do I get it right that your VAE is multi-scale, while the diffusion model is not? Or vice versa?
- How is the random noise \epsilon sampled for expert routing (L122)?
- What is the motivation of using focal loss for edge prediction instead of other loss types? Using focal loss here is quite non-intuitive to me. Did you try ablating it?
There are also several smaller (but still reasonable) concerns:
- It's not clear whether spatial experts reflect any text semantics. Judging by Figure 4, the expert assignments look completely random. I would expect visually similar concepts ("tiger"/"cat"/"leopard"; "dog"/"wolf"/"fox"; "tv"/"monitor") to get assigned to the same expert. Could you please provide any support or refutation of this intuition (e.g., by checking the clusters)?
- The writing quality could be improved. Many variables are introduced, and it would ease reading if they were described in text, i.e., instead of writing "we set \lambda to X", one should write "we set learning rate \lambda to X". Otherwise, a reader needs to jump back and forth trying to recall each variable's meaning.
- There is a quite confusing notation clash:
- \epsilon denotes random noise in diffusion and random noise (L74) in experts routing (L122)
- \alpha denotes variance schedule (L74), focal loss hyperparameter (Figure 6a), routing multiplier (L129)
- Limitations are not properly discussed (see the "Limitations" form below)
- GigaGAN is a missed reference since they also use MoEs, routed by text tokens.
- It's not clear why SR-GAN is included in the exposition, since there is nothing special about it and SR-GAN can be combined with any other image generator. Does it work better for RAPHAEL than for other image generators? If so, then it's interesting. But if not — then it's not clear why one would claim that RAPHAEL can generate 4096 x 6144 images when combined with it. Following such arguments, what prevents one from saying that StyleGAN can generate 100,000 x 100,000 images when combined with tailor-made bilinear interpolation?
Typos and minor comments:
- L114: "mean of all experts" => "weighted average of all experts"?
I look forward to discussing my concerns with the authors and fellow reviewers and improving my rating.
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: I raised several concerns in the "Weaknesses" section and would be grateful to hear the authors' opinion on them. My main concerns are the lack of details about the method and experiments.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 2 fair
Limitations: There is a brief limitations discussion on potential negative societal impact. In this regard, I wouldn't demand more discussion from the authors since a potential misuse of powerful image generators is a well-known issue to the community and should be discussed at a "higher" level, rather than in this particular work.
However, it would be good to see the discussion of other potential limitations of the work, for example, what are the disadvantages of binarizing the attention maps? whether it is possible to ablate the model properly — at least via convergence plots for partial runs (I understand that training a full model for each ablation is infeasible)? what could be a problem with edge maps supervision of the attention maps (i.e., I guess there should be failure cases in edge map detection)? And so on.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear Reviewer NLKN,
Thank you for the many constructive suggestions on our paper. We clarify the settings below.
**Q1: The setting of ablation study**
We conducted an ablation study in the following manner:
For Fig.6a, we resume the final model trained for two months, and train each point in Fig.6a for another 100M samples to ensure convergence, using the same dataset (LAION and internal datasets) and seed.
For Fig.6b, we resume the final model trained for two months and then individually delete the space-moe, time-moe, and edge-supervised learning modules, resulting in three different models. Next, we continue to train each of these models without the respective module for 100M training samples **to ensure their convergence**, using the same dataset (LAION and internal datasets) and seed. **The implementations of space-moe and time-moe both use residual connections, so the deletion operation is well-defined.** We measure the FID curves for these three models, and the results are presented in Fig. 6b.
For Fig. 6c, the red and blue lines represent the FID-expert curves (left axis) for space-moe and time-moe, respectively, showing that FID decreases with an increased number of space experts and time experts. We conduct this ablation study following the pipeline of Fig.6b. We resume the final model trained for two months and then individually deleted the space-moe or time-moe. We add new space-moe or time-moe modules according to the number of experts needed and train each setting for 100M training samples.
The green line in Fig. 6c represents the inference speed (DDIM steps/s, right axis) with an increased number of space experts. We provide an alternative way to speed up space-moe on our cluster. The definition of space-moe is as follows: $\frac{1}{n_y}\sum_{i=1}^{n_y} e_{\operatorname{route}(y_i)}(h'(x_t) \circ \widehat{\mathbf{M}}_i )$. In the naive implementation, each token's feature is routed to its expert one at a time via a "for" loop, which cannot be optimized in our hardware setup. We propose the following implementation to address this:
1. We obtain a list $[a_1, a_2,..., a_k]$ = $[e_1(h'(x_t)), e_2(h'(x_t)),..., e_k(h'(x_t))]$.
2. The output of space-moe can be calculated as: $\frac{1}{n_y}\sum_{i=1}^{n_y} a_{\operatorname{route}(y_i)}\circ \widehat{\mathbf{M}}_i$.
The above implementation is always faster. We also ablated these two implementations, each trained with ~120M samples, and find that they exhibit similar performances in human evaluation and similar diffusion paths. The inference cost compared with other popular text-to-image diffusion models can be found in the pdf file of the general response.
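A minimal pure-Python sketch of the two implementations, using toy 2x2 linear experts and hypothetical masks and routes (note that the two are not mathematically identical in general, since masking before a non-pointwise expert differs from masking its output, which is consistent with the ablation of both implementations described above):

```python
def matvec(W, v):
    """Apply a toy linear expert W (list of rows) to vector v."""
    return [sum(w * x for w, x in zip(row, v)) for row in W]

def vec_mask(v, m):
    """Elementwise mask (the "circle" operator)."""
    return [x * b for x, b in zip(v, m)]

def vec_mean(vectors):
    n = len(vectors)
    return [sum(col) / n for col in zip(*vectors)]

# Two toy experts (2x2 matrices), a 2-d feature, three tokens with masks/routes.
experts = [[[1.0, 0.0], [0.0, 2.0]], [[0.0, 1.0], [1.0, 0.0]]]
h = [3.0, 4.0]                     # stands in for h'(x_t)
masks = [[1, 0], [0, 1], [1, 1]]   # binarized cross-attention masks M_i
route = [0, 1, 0]                  # route(y_i): expert index per text token

# Naive: loop over tokens, mask first, then apply the routed expert.
naive = vec_mean([matvec(experts[route[i]], vec_mask(h, masks[i])) for i in range(3)])

# Fast: run every expert once on the unmasked feature, then mask its output.
a = [matvec(W, h) for W in experts]
fast = vec_mean([vec_mask(a[route[i]], masks[i]) for i in range(3)])
```

The fast path runs each expert exactly once per feature map, replacing the per-token loop with a fixed, batchable set of expert calls.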
**Q2: * How exactly does the U-net and VAE look like**
Please refer to the Q3 of our global response.
**Q3: In-house dataset**
Please refer to the Q1 of our global response.
**Q4: Implementation of multi-scale training**
Please refer to the Q2 of our global response for the detailed implementation.
**Q5: Choice of \epsilon.**
We use a small constant, 1e-6.
**Q6: Choice of Focal Loss**
Yes, we explored other loss types and conducted ablation studies. The most intuitive loss is cross-entropy. However, edge pixels are heavily outnumbered by background pixels, so with cross-entropy the prediction module tends to classify all pixels as background, negatively impacting the cross-attention maps. To overcome this imbalance, we adopted Focal Loss, which handles it effectively.
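For illustration, here is a per-pixel binary focal loss next to plain cross-entropy; the `alpha` and `gamma` values below are the standard defaults from the focal loss paper, not values reported for RAPHAEL:

```python
import math

def bce(p, y):
    """Plain binary cross-entropy for one pixel (p = predicted edge prob.)."""
    pt = p if y == 1 else 1.0 - p
    return -math.log(pt)

def focal(p, y, alpha=0.25, gamma=2.0):
    """Binary focal loss: down-weights easy, confidently-classified pixels."""
    pt = p if y == 1 else 1.0 - p
    a = alpha if y == 1 else 1.0 - alpha
    return -a * (1.0 - pt) ** gamma * math.log(pt)

# An easy background pixel (true label 0, predicted edge probability 0.1):
easy_bce = bce(0.1, 0)      # ~0.105
easy_focal = focal(0.1, 0)  # ~0.0008, two orders of magnitude smaller
```

Since edge maps contain vastly more background than edge pixels, this down-weighting keeps the abundant easy negatives from dominating the gradient, which matches the imbalance argument above.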
**Q7: Semantics of space experts.**
We observe some patterns: similar concepts have similar diffusion paths (each concept generates 100 paths with our template). For example, smooth/glossy, minimal/minimalist, dreamy/dreamlike, happy/joyful, sad/gloomy, brave/courageous, tired/exhausted, bright/luminous, honest/sincere, puzzled/confused, brilliant/shining, grateful/thankful, harsh/severe, enormous/huge, humble/modest. We find that the diffusion paths generated by each pair always share at least 11 experts out of 16 blocks.
In contrast, the diffusion paths given by visually similar but **different** pairs, such as tiger/cat/leopard (300 paths), dog/wolf/fox (300 paths), and tv/monitor (200 paths), always share only a relatively small number of experts (fewer than 7). We suspect this is because they do not have similar semantics.
**Q8: Writing quality, notations, references and SR-GAN**
Thanks for pointing out these issues. We will polish the writing; we cannot edit the paper on OpenReview at the moment.
We will also modify the claim about SR-GAN.
We also apologize for missing the excellent GigaGAN paper. We will add the reference once we can edit the paper.
**Q9: Limitations**
Indeed, we acknowledge several limitations in our paper that require attention. One limitation is the direct binarization of the attention map, which may lose some information; an adaptive module could address this issue more effectively. Additionally, performance may degrade on failure cases of the edge detector. We plan to explore solutions for these limitations in future work.
Another limitation pertains to the design of our ablation study. Conducting each full setting in the ablation study would be prohibitively expensive and time-consuming. As a result, we opt to run partial settings for the ablation study, which still takes 1.5 months to complete. We recognize the need for a better and more efficient approach to conducting ablation studies of foundation models in the research community.
During the 7-day rebuttal period, it is almost impossible to run so many experiments for re-plotting the convergence curves for Fig. 6. However, based on our experience, we observe that edge-supervised learning can converge in less than three days. On the other hand, space-moe and time-moe converge much slower, requiring at least two weeks, given the setting of 6 space experts and 4 time experts.
---
Rebuttal Comment 1.1:
Title: Follow-up questions
Comment: Thank you for your response, it helped me to understand your work much better and resolved several concerns.
I would like to clarify a couple of more questions if that's possible:
1. Do I get it right that for your multi-scale training, the diffusion UNet model also operates on inputs/outputs of different resolutions? E.g., an image of shape [3, 448, 832] is encoded into a latent code of shape [12, 56, 104]? Does this also mean that there is an attention layer somewhere inside the UNet running at a 7x13 resolution? Or do you use different downsampling factors inside the UNet?
2. What does "Use checkpoint" mean in Table 2 in the appendix?
3. How many overall steps/epochs do you do over the course of your 2 months training?
For my previously raised concerns:
- About Q1 (ablations). Honestly, the provided ablations are quite unusual, since the performance might drop after removing the components/objectives simply because it is too drastic a change in the architecture (even with the presence of residual connections). For Figure 6a, do I get it right that you trained the model with $\alpha=0.2$, $T_c=500$, and then, after 2 months of training, fine-tuned for other hyperparameter values? If so, how did you know that $\alpha=0.2$, $T_c=500$ were optimal?
- About Q7. Is it possible to attach any plots to the rebuttal or update the previously attached PDF? I would be curious to see the diffusion paths for those kinds of prompts.
---
Reply to Comment 1.1.1:
Title: Response to follow-up questions
Comment: Thanks for your follow-up questions. I'm delighted to address your concerns.
**Q1: Multi-scale Training**
Yes, the diffusion UNet model also operates on inputs/outputs of different resolutions.
Yes, there is an attention layer inside the UNet running at a 7x13 resolution. Consequently, this requires the bucket dimensions to be divisible by 64. Notably, my experience has shown that cropping images to a fixed scale (e.g. 640) can **destroy** the performance of text-to-image diffusion models, which makes a multi-scale training approach all the more important.
Based on my experience, fine-tuning a model trained at a fixed scale (e.g. 640 x 640) to a multi-scale version takes 5 days.
**Q2: Use of Checkpointing**
The term "use_checkpoint" originates from the configurations of Stable Diffusion (unet_config). PyTorch gradient checkpointing is employed to conserve memory and mitigate CUDA out-of-memory errors.
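For reference, the flag appears in the public Stable Diffusion model configs roughly as follows (abridged and reconstructed from those configs; RAPHAEL's exact configuration is not released):

```yaml
unet_config:
  target: ldm.modules.diffusionmodules.openaimodel.UNetModel
  params:
    use_checkpoint: True  # enable gradient checkpointing: trade recomputation for memory
```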
**Q3: Iterations**
Our model has undergone training for approximately 1.02 million iterations, equivalent to roughly 2.9 epochs.
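As a hedged back-of-the-envelope cross-check (assuming an "epoch" counts each of the ~730M dataset entries, 440M filtered LAION-5B plus 290M internal, exactly once; the batch size below is implied by these figures, not reported):

```python
# Dataset sizes as stated in the rebuttal to another reviewer.
dataset_size = 440e6 + 290e6   # filtered LAION-5B + internal entries
iterations = 1.02e6            # reported training iterations
epochs = 2.9                   # reported epochs

# Samples seen per optimizer step, implied by the numbers above.
implied_global_batch = epochs * dataset_size / iterations  # roughly 2,000 samples/step
```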
**Q4: Ablation Study**
**Experimental Setup**
Firstly, the objective of the ablation study is to validate the efficacy of each module outlined in our paper. The deletion operation works very well in our experiments. Fig. 6c provides further affirmation – **by resuming training from the same initial point and progressively increasing the number of experts, a consistent reduction in FID is observed. This unequivocally confirms the efficacy of both space-moe and time-moe.**
The deletion of the edge-supervised learning module doesn't significantly alter the architecture, and this module exhibits relatively rapid convergence.
Secondly, we believe this to be the **optimal** approach for ablating RAPHAEL, particularly because our model is a **large-scale text-to-image** generator evaluated by **zero-shot** FID. Even running these partial ablation settings entailed a substantial investment of 1.5 months and millions of dollars.
**Tuning $\alpha$ and $T_c$**
Regarding these two hyperparameters, we adopt a proxy-based strategy to identify the optimal values before commencing full-scale training.
As both space-moe and edge-supervised learning are compatible with stable diffusion, we incorporate them into Stable Diffusion v1.4. This involves training each configuration over a 14-day period and subsequently searching the hyperparameters based on the evaluation outcomes. Note that this is not an ablation of RAPHAEL itself but rather an investigation based on stable diffusion, so it was not included in the paper. However, if necessary, we can add this part to the appendix of the camera-ready version if the paper is accepted, since we cannot edit it now.
This practice is widely adopted within the foundation model community prior to embarking on full-scale training.
**Q5: Diffusion Path**
I have lost permission to edit any contents of my paper, PDF files, and even the preceding rebuttal during this author-reviewer discussion phase.
I will definitely add these visualization results to the camera-ready version, if the paper is accepted, to facilitate the community and enhance the accessibility of our work.
Please let me know if you still have more concerns, and I'm very happy to discuss them with you. | Summary: The paper proposed RAPHAEL, a text-to-image diffusion model. The model adopt MoE layers, including space-MoE and time-MoE layers. In addition, edge-supervised learning is proposed to enhance performance. RAPHAEL establishes a new state-of-the-art with a zero-shot FID-30k score of 6.61 on the COCO dataset, and surpasses its counterparts in human evaluation on the ViLG-300 benchmark.
Strengths: 1. The authors promised to release a programming API for RAPHAEL to the public.
2. Explores spatial- and time-MoE and performs ablation studies.
3. Visualizes spatial- and time-MoE in the appendix.
4. Achieves a SOTA zero-shot FID score on COCO.
Weaknesses: As a paper focusing on pretraining, more clarification of experimental details is needed, including:
a. Data. The paper mentions that "The training dataset consists of LAION-5B and a few internal data." How much internal data is used? What are its category distribution and collection sources?
b. Model structure and hyperparameters, including the VAE and each stage of the diffusion model.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. Is the space-moe performed only on text-image cross-attention? How about applying MoE to self-attention within the image?
2. In the abstract the authors mention that "RAPHAEL exhibits superior performance in switching images across diverse styles" (L13). Does this benefit from spatial-moe? If so, is there any pattern in the MoE paths regarding different styles?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: see weakness and questions.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear Reviewer 8jJJ,
Thanks for appreciating our work and your advice. We will address your concerns below.
**Q1: The paper mentioned that "The training dataset consists of LAION-5B and a few internal data.". How many internal data is used? Its category distribution and collecting sources?**
The data consists of approximately 440 million entries filtered from LAION-5B and approximately 290 million entries from internal datasets, which is less than what Imagen's dataset contains. These datasets possess special characteristics, notably high quality and aesthetics.
To collect our internal datasets, we follow the methodology of DALL-E [1]. We curate a dataset on a scale similar to JFT-300M by sourcing images from the Internet. We remove instances with aspect ratios outside the range [1/2, 2], and we follow Stable Diffusion v1.4 in filtering out images with low aesthetic scores. For captioning these images, we utilized BLIP-2. The main reason for constructing such an internal dataset with **high aesthetics** is to compensate for the poor quality of LAION. We do not impose additional limits on categories; the goal is simply to collect data with high aesthetics.
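A hedged sketch of the described filtering (the aesthetic-score threshold of 5.0 is an assumption borrowed from the LAION-Aesthetics subsets commonly used with Stable Diffusion, not a value stated here):

```python
def keep_image(width: int, height: int, aesthetic_score: float,
               score_threshold: float = 5.0) -> bool:
    """Keep images whose aspect ratio lies in [1/2, 2] and whose predicted
    aesthetic score clears the (assumed) threshold."""
    ratio = width / height
    return 0.5 <= ratio <= 2.0 and aesthetic_score >= score_threshold
```

For example, a square image with a high aesthetic score passes, while an extreme panorama is dropped regardless of its score.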
Regarding the resolution distribution over the buckets mentioned in Q2 of the global response, it is as follows: [52, 24, 161, 470, 52, 81, 56, 70, 34].
Furthermore, in the realm of text-to-image generation, most papers use a combination of internal datasets. For example, Imagen (NeurIPS'22 Outstanding Paper) employs 440M internal data and 400M public data; ERNIE-ViLG (CVPR'23) utilizes LAION-5B and internal Chinese text-image pairs; MUSE (ICML'23) uses the same dataset as Imagen; and GigaGAN (CVPR'23) also leverages Adobe's internal data for its upsampler.
**Q2: Model structure and hyperparameters. including VAE, and each stage of diffusion model.**
The VAE model structure is based on the setup of Stable Diffusion, utilizing a KL-based VAE. Following the pipeline of LDM, an additional discriminator is introduced to train the VAE. The downsampling ratio is 8 and the z channel size is 12. We also change the ch hyper-parameter of the LDM's VAE from 128 to 256.
Regarding the hyperparameters for the diffusion models, our approach consists of a single stage. We adhere to the UNet configurations of stable diffusion v2.1, with the exception of disabling the self-attention module in the largest resolution due to its computational complexity.
We plan to include these details in our paper, and as soon as we gain access to edit it in openreview, we will upload the information accordingly.
**Q3: Is the space-moe performed only on text-image cross attention? How about applying moe to self-attention within image?**
The space-moe operation is specifically designed for text-to-image generation. Its purpose is to depict different *text* concepts within specific image regions. Thus, a *text* token is required to perform the cross-attention operation effectively.
When it comes to the self-attention module, however, the self-attention map always resembles the "contour" of the image and has no direct mapping to a particular token. As a result, the space-moe operation cannot be applied to self-attention. It can be applied to cross-attention because cross-attention captures the correlations between textual descriptions and corresponding image regions.
**Q4: In abstract the authors mentioned that "RAPHAEL exhibits superior performance in switching images across diverse styles" (L13). Is it benefit from spatial-moe? if so, is there any pattern in moe paths regarding different styles?**
Yes, it benefits from space-moe.
Firstly, as shown in Fig. 6b, space-moe significantly increases the CLIP score, which measures the alignment between images and text descriptions.
Secondly, we observe some patterns (each concept generates 100 paths with our template). For style concepts such as anime/digital/realistic/cyberpunk/artistic/colorful/minimalist/bright, etc., we find that each pair of these styles (such as anime/digital, digital/realistic, etc.) always shares a relatively small number of experts (fewer than 7). Moreover, the adjectives provided in our appendix also contain many style concepts, and they can be easily classified by the XGBoost algorithm.
So we think different style concepts have different diffusion paths.
Thirdly, we conducted a human evaluation using the model with and without space-moe based on prompts containing style information from the ViLG-300 dataset. The results indicate that most people (76.15%) tend to prefer the model with space-moe, as they believe it better matches the images with the specified style.
Finally, we also observe that diffusion paths reflect semantics, similar concepts/styles have similar diffusion paths. For example, smooth/glossy, minimal/minimalist, dreamy/dreamlike, happy/joyful, sad/gloomy, brave/courageous, tired/exhausted, bright/luminous, honest/sincere, puzzled/confused, brilliant/shining, grateful/thankful, harsh/severe, enormous/huge, humble/modest. We find the 200 diffusion paths generated by each pair share at least 11 experts out of 16 blocks. So we believe space-moe also helps the understanding of semantics.
---
Rebuttal Comment 1.1:
Comment: Thanks for your reply. All of my questions have been addressed, and I'm still leaning toward accepting this paper.
---
Reply to Comment 1.1.1:
Title: Thanks for your comments
Comment: We deeply thank you for the kind support of our work! | Summary: This paper proposes RAPHAEL, a new text-to-image generative model, based on the latent diffusion model framework. The main methodological contribution is the use of space-mixture-of-experts (space-MoE) layers. These are layers that focus on different concepts from the text prompt in different spatial areas of the synthesized image. Different space-MoEs are automatically chosen for the different text tokens, and they are assigned to the relevant regions in the image, which can be found through the cross-attention maps. A similar time-MoE is also incorporated, although that is less novel. The model is validated on standard benchmarks and achieves state-of-the-art performance. Ablations over all relevant new components and hyperparameters are performed. The paper shows various qualitative results and model samples, which are quite impressive. The authors also demonstrate that RAPHAEL can be easily combined with ControlNet, LoRA fine-tuning, and super resolution GANs to enhance the image resolution.
Strengths: The main strengths of the paper are:
- State-of-the-art text-to-image generation performance, including visually impressive results.
- Extensive ablation studies on all relevant components.
- The space-MoE idea is well motivated, novel, and boosts performance non-negligibly.
- I like in particular the gating mechanisms in both the space- and the time-MoE, which, for instance in the time-MoE case, automatically learn to assign different timesteps to various time experts. Previous work manually assigned experts to different diffusion time intervals (for instance, eDiff-I).
**Clarity:** The paper is well written and easy to read and follow. There are no major concerns regarding clarity. However, some details seem to be missing (see below).
**Originality:** In general, the mixture-of-experts idea is not new and has existed in language models and was also used in the text-to-image literature, for instance in the related eDiff-I (only time experts). However, the space-MoE technique specifically in the context of text-to-image generation is new, to the best of my knowledge, and it is well-motivated and seems to meaningfully boost performance. Hence, while the paper's originality is not groundbreaking, the main technical contribution is well-motivated and novel.
**Significance:** Text-to-image generation is a highly relevant and impactful topic, and RAPHAEL achieves state-of-the-art performance in this competitive area. Its mixture-of-experts approach is well-motivated and may find wider adoption. Apart from the quantitative evaluations, its visual results are stunning. Hence, I think the paper is impactful and significant.
**Quality:** The overall quality of the paper is high. The paper is easy to read, appropriately discusses the related literature, provides a background section, runs extensive ablation studies for all new relevant parameters, and supports its claims by appropriate experiments. The qualitative and quantitative results are strong. There are only relatively minor concerns with respect to missing details (see below).
Weaknesses: The paper does not have any major weaknesses. However, I have some minor concerns:
1. Many details are missing:
- The paper only mentions on the side (Line 192) that it is using a latent diffusion model framework and does not operate in pixel space.
That is fine; however, this requires more details. For instance:
- Was the autoencoder regularized? The LDM paper uses either KL-based or VQ-based regularization.
- Was the autoencoder trained only with a reconstruction loss? Or also with a (patch-wise) discriminator?
- What was the downsampling ratio?
- The work should explain the multi-scale training in more detail. How exactly is the model trained at all these different resolutions and aspect ratios at once?
- The gating mechanisms incorporate an $argmax$ function. The $argmax$ is usually not differentiable. How did the authors deal with that, to enable regular backpropagation for training? I think it would be helpful to discuss this in a bit more detail.
- The paper says that RAPHAEL is trained on LAION "and a few internal data". What is this internal data? Even if this data is internal and not released, the authors should describe it and what value it brings on top of LAION. E.g. what is the size of that internal data? Does it have any special characteristics? Only high-quality, for instance? Etc.
2. A discussion on limitations is missing. This would further strengthen the paper.
In conclusion, the paper's main weaknesses are all related to missing details and I believe these issues can be addressed easily. I do not see any other major concerns. Considering the paper's strengths discussed above, I am consequently suggesting acceptance of the paper.
Technical Quality: 4 excellent
Clarity: 3 good
Questions for Authors: I have only one minor question (just curiosity, not impacting the paper rating):
Figure 4 shows the diffusion paths for different simple concepts. Do more related or similar concepts also have more similar diffusion paths? For instance, would "strawberry" and "raspberry" share much of their paths, while "strawberry" and "car" would not?
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 4 excellent
Presentation: 3 good
Contribution: 3 good
Limitations: Potential negative societal impacts have been briefly, but sufficiently addressed. Limitations have not been discussed. What are RAPHAEL's limitations? I would like to encourage the authors to add a critical discussion on this.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear Reviewer PszJ,
Thank you for appreciating our approach. We will address your concerns below.
**Q1: Details of VAE**
Yes, we follow the setup of Stable Diffusion and use a KL-regularized VAE. When training the VAE, we add an extra discriminator following the LDM pipeline. The downsampling ratio is 8, and the z channel size is 12. We also change ch (a hyper-parameter of the LDM's VAE) from 128 to 256.
**Q2: The work should explain the multi-scale training in more detail. How exactly is the model trained at all these different resolutions and aspect ratios at once?**
Thanks for your question. We explain it in detail below:
As outlined in our research paper, we employ a system comprising 9 buckets, each representing a distinct image scale. The initial step involves resizing an image to its nearest size within these predefined buckets. Subsequently, the allocation of GPU resources to each bucket is automated, based on the number of images they contain. This approach ensures efficient utilization of computational resources. Here is a step-by-step breakdown of the process:
1. Aspect Ratio List ($R$): We maintain a list ($R$) containing aspect ratios for all the images in our dataset. The length of this list corresponds to the total number of images.
2. Bucket List ($L$): We establish nine buckets ($L$) that encompass various image sizes, including [448, 832], [512, 768], [512, 704], [640, 640], [576, 640], [640, 576], [704, 512], [768, 512], and [832, 448]. For a given image ratio in $R$, we identify the nearest bucket size in $L$ through a matching process. For example, images with an aspect ratio of 1.0 will be associated with the bucket of size [640, 640].
3. Mapping Aspect Ratios to Buckets ($R1$): As a result of the previous step, we generate another list ($R1$) of the same size as $R$. Each element in $R1$ indicates the bucket to which the corresponding element from $R$ is assigned.
4. GPU Allocation: We proceed to compute the GPU allocation based on the information provided by $R1$. Firstly, we calculate the total number of images each bucket contains. Using this information, we create another list ($L1$) of the same size as $L$, where L1.sum() denotes the total number of images we have. Then, we utilize a simple trick to distribute these buckets across different GPUs. The following code snippet demonstrates this process:
bk_gpu_nums = np.clip((L1 / L1.sum() * world_size).astype(int), 0, world_size)
bk_gpu_nums[bk_gpu_nums.argmax()] -= bk_gpu_nums.sum() - world_size
Here, "world_size" denotes the total number of GPUs available, and "bk_gpu_nums" signifies the number of GPUs required for each bucket. Then we assign different GPUs to different buckets.
5. Image Selection: Each GPU within a specific bucket will select images from the dataset according to the mapping provided by $R1$, and this GPU will train the model with this bucket scale.
While we acknowledge that the "astype(int)" operation may involve approximation, it becomes negligible when dealing with a large number of GPUs and datasets.
Finally, it is essential to note that each GPU employs the same batch size during the training process.
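Putting the steps above together, the allocation trick can be sketched end-to-end. The nine bucket sizes come from the rebuttal; the per-bucket image counts (L1) below are hypothetical, chosen only for illustration.

```python
import numpy as np

# Nine bucket sizes from the rebuttal; per-bucket image counts (L1) are
# hypothetical, chosen only for illustration.
buckets = [[448, 832], [512, 768], [512, 704], [640, 640], [576, 640],
           [640, 576], [704, 512], [768, 512], [832, 448]]
L1 = np.array([520, 240, 1610, 4700, 520, 810, 560, 700, 340])
world_size = 1000  # total number of GPUs

# Allocate GPUs proportionally to bucket size, flooring to integers, then
# let the largest bucket absorb the rounding residual so the counts sum
# exactly to world_size.
bk_gpu_nums = np.clip((L1 / L1.sum() * world_size).astype(int), 0, world_size)
bk_gpu_nums[bk_gpu_nums.argmax()] -= bk_gpu_nums.sum() - world_size
```

The residual correction on the last line is what guarantees that the per-bucket GPU counts sum exactly to `world_size` despite the flooring in `astype(int)`.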
**Q3: Implementation of $argmax$ function**
Given a vector $v$, the first step is to compute the softmax value $y_{soft} = softmax(v)$.
Next, the gating decisions are determined using the $argmax$ operation, taken as the one-hot vector of the winning index: $y_{hard} = \text{onehot}(argmax \ y_{soft})$.
To ensure differentiability and enable backpropagation during training, we adopt a technique that bridges the gap between the discrete and differentiable representations. We introduced a soft version of the gating function, denoted as $y_{hard'}$, which can be expressed as follows: $y_{hard'} = (y_{hard} - y_{soft}).detach() + y_{soft}$.
By using this softened version of the gating function, we could successfully perform backpropagation during training. Although the gating decisions were made based on $y_{hard'}$, the actual backpropagation process was executed on the differentiable representation $y_{soft}$.
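The straight-through trick above can be sketched in a few lines of PyTorch. The gate logits `v` here are hypothetical, and this is only an illustrative sketch of the detach-based estimator, not the model's actual gating module.

```python
import torch
import torch.nn.functional as F

# Hypothetical gate logits; in the model these would come from the gating network.
v = torch.tensor([0.5, 2.0, -1.0, 0.3], requires_grad=True)

y_soft = F.softmax(v, dim=-1)
# Hard one-hot gate picked by argmax -- not differentiable on its own.
y_hard = F.one_hot(y_soft.argmax(), num_classes=v.numel()).float()
# Straight-through: the forward value equals y_hard, but gradients flow through y_soft.
y_st = (y_hard - y_soft).detach() + y_soft

# Toy loss to confirm that v still receives a gradient.
loss = (y_st * torch.arange(4.0)).sum()
loss.backward()
```

In the forward pass `y_st` is numerically the hard one-hot gate, while in the backward pass the detached term contributes nothing, so the gradient is exactly that of `y_soft`.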
**Q4: Internal data**
We have $\approx$ 440M samples filtered from LAION-5B and $\approx$ 290M internal samples, which is less than Imagen's total. Yes, the internal data has special characteristics, such as high quality and high aesthetics.
To collect our internal datasets, we follow the methodology of DALL-E [1]. We curate a dataset on a scale similar to JFT-300M by sourcing images from the Internet. We remove instances with aspect ratios outside the range of [1/2,2], and we follow Stable Diffusion v1.4 to filter out images with low aesthetics scores. For captioning these images, we utilized BLIP-2. The main reason for constructing such an internal dataset with high aesthetics is to compensate for the poor quality of LAION.
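The filtering criteria above can be sketched as a simple predicate; the aesthetics threshold of 5.0 is our assumption for illustration, not a number from the paper.

```python
# Hypothetical sketch of the internal-data filtering described above: keep
# images whose aspect ratio lies in [1/2, 2] and whose aesthetics score
# clears a threshold (the 5.0 default is an assumption, not from the paper).
def keep_image(width, height, aesthetics_score, min_score=5.0):
    ratio = width / height
    return 0.5 <= ratio <= 2.0 and aesthetics_score >= min_score
```

For instance, a square image with a good aesthetics score passes, while an image with aspect ratio 2.5 is rejected regardless of its score.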
Furthermore, most papers on text-to-image generation use a series of internal datasets. For example, Imagen (NeurIPS'22 Outstanding Paper) uses 440M internal data and 400M public data; ERNIE-ViLG 2.0 (CVPR'23) uses LAION-5B and internal Chinese text-image pairs; MUSE (ICML'23) uses the same dataset with Imagen; GigaGAN (CVPR'23) also uses Adobe's internal data for its upsampler.
**Q5: More properties of diffusion paths**
Yes, we observe that similar concepts have similar diffusion paths (each concept generates 100 paths with our template). For example, smooth/glossy, minimal/minimalist, dreamy/dreamlike, happy/joyful, sad/gloomy, brave/courageous, tired/exhausted, bright/luminous, honest/sincere, puzzled/confused, brilliant/shining, grateful/thankful, harsh/severe, enormous/huge, humble/modest, etc. We find the 200 diffusion paths generated by the similar concept pair always share at least 11 experts out of 16 blocks.
But **different** concepts, such as strawberry/car, always share only a small number of experts (fewer than 7).
**Q6: Limitations**
Please refer to Q9 of reviewer NLKN, where we discuss limitations in detail.
---
Rebuttal Comment 1.1:
Title: Thank you for the rebuttal
Comment: I would like to thank the authors for their rebuttal and for providing extensive details to answer my questions. It would be great to incorporate all these explanations into the final version of the paper. I do not have any further questions or concerns. I have been already positive about the paper and still suggest acceptance.
---
Reply to Comment 1.1.1:
Title: Thanks for your comments
Comment: We sincerely thank the reviewer for the kind support of our work! We will incorporate the details into our final version. | Rebuttal 1:
Rebuttal: General response:
We express our sincere appreciation to all the reviewers for their valuable time and efforts in reviewing our paper. We are delighted to learn that the reviewers have generally recognized and appreciated the contributions made in our work, which include:
1. State-of-the-art performance (PszJ, 8jJJ, NLKN, and fnVX).
2. Introducing two novel ideas, namely, space-moe (PszJ, NLKN) and edge-supervised learning (NLKN).
3. Extensive ablation study (PszJ, 8jJJ, fnVX) and visualization results (8jJJ, NLKN).
4. Thorough comparison to other methods (NLKN).
We extend our gratitude to all the reviewers for their insightful and constructive suggestions. The main concerns raised by the reviewers revolve around our experimental settings. To address these concerns, we provide the following summaries and discuss them in rebuttal:
1. Internal datasets (PszJ, 8jJJ, NLKN, fnVX).
2. Multi-scale training implementation (PszJ, NLKN).
3. VAE and U-Net configurations (PszJ, 8jJJ, NLKN).
4. Ablation study settings (NLKN).
5. Choice of Focal Loss (NLKN).
6. Additional properties of diffusion paths. (PszJ, 8jJJ, NLKN)
7. Inference cost. (fnVX)
We are committed to incorporating these improvements and addressing all the raised concerns in our revised paper once we can edit it in openreview.
We also make a global response to Q1, Q2, and Q3 above:
**Q1: Information about internal dataset**
The data consists of approximately 440 million entries filtered from LAION-5B and approximately 290 million entries from internal datasets, which is less than what Imagen's dataset contains. These datasets possess special characteristics, notably high-quality and aesthetics.
To collect our internal datasets, we follow the methodology of DALL-E. We curate a dataset on a scale similar to JFT-300M by sourcing images from the Internet. We remove instances with aspect ratios outside the range of [1/2,2], and we follow Stable Diffusion v1.4 to filter out images with low aesthetics scores. For captioning these images, we utilized BLIP-2. The main reason for constructing such an internal dataset with **high aesthetics** is to compensate for the poor quality of LAION. We don't add additional limits on categories but want to collect datasets with high aesthetics.
The resolution distribution is [52,24,161,470,52,81,56,70,34] if we adopt the buckets in Q2 and 1k GPUs. It indicates, for example, that there are 52 GPUs for [448, 832], and each GPU handles a similar number of images.
**Q2: Implementation of multi-scale training**
As outlined in our research paper, we employ a system comprising 9 buckets, each representing a distinct image scale. The initial step involves resizing an image to its nearest size within these predefined buckets. Subsequently, the allocation of GPU resources to each bucket is automated, based on the number of images they contain. Here is a step-by-step breakdown of the process:
1. Aspect Ratio List ($R$): We maintain a list ($R$) containing aspect ratios for all the images in our dataset. The length of this list corresponds to the total number of images.
2. Bucket List ($L$): We establish nine buckets ($L$) that encompass various image sizes, including **[448, 832], [512, 768], [512, 704], [640, 640], [576, 640], [640, 576], [704, 512], [768, 512], and [832, 448]**. For a given image ratio in $R$, we identify the nearest bucket size in $L$ through a matching process. For example, images with an aspect ratio of 1.0 will be associated with the bucket of size [640, 640].
3. Mapping Aspect Ratios to Buckets ($R1$): As a result of the previous step, we generate another list ($R1$) of the same size as $R$. Each element in $R1$ indicates the bucket to which the corresponding element from $R$ is assigned.
4. GPU Allocation: We proceed to compute the GPU allocation based on the information provided by $R1$. Firstly, we calculate the total number of images each bucket contains. Using this information, we create another list ($L1$) of the same size as $L$, where L1.sum() denotes the total number of images we have. Then, we utilize a simple trick to distribute these buckets across different GPUs. The following code snippet demonstrates this process:
bk_gpu_nums = np.clip((L1 / L1.sum() * world_size).astype(int), 0, world_size)
bk_gpu_nums[bk_gpu_nums.argmax()] -= bk_gpu_nums.sum() - world_size
Here, "world_size" denotes the total number of GPUs available, and "bk_gpu_nums" signifies the number of GPUs required for each bucket. Then we assign different GPUs to different buckets.
5. Image Selection: Each GPU within a specific bucket will select images from the dataset according to the mapping provided by $R1$, and this GPU will train the model with this bucket scale.
While we acknowledge that the "astype(int)" operation may involve approximation, it becomes negligible when dealing with a large number of GPUs and datasets.
Finally, it is essential to note that each GPU employs the same batch size during the training process. This comprehensive approach enables effective multi-scale training, resulting in enhanced performance and robustness of the model.
Given 1000 GPUs and the predefined buckets, the final resolution distribution (including LAION and internal datasets) of these buckets is [28,67,116,419,43,52,76,112,87], which means there are 28 GPUs in bucket [448, 832].
Hence, different buckets contain different images and are assigned different numbers of GPUs. Each GPU samples from the bucket it belongs to. Both the VAE and the diffusion model need multi-scale training.
**Q3: Configs of VAE and U-Net**
For U-Net, the difference between ours and Stable Diffusion v2.1 is we disable the self-attention module for the largest resolution due to its computation complexity. For VAE, the downsampling ratio is 8, z channel size is 12. We also change the ch (hyper-parameter of the LDM's VAE) from 128 to 256.
Pdf: /pdf/ff130c625f76f9c7c12d31dd540d127725dc10c2.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
SPRING: Studying Papers and Reasoning to play Games | Accept (poster) | Summary: Open-world survival games such as Crafter pose a significant challenge for RL agents. In this paper, the authors propose a novel framework called SPRING that studies Crafter's latex paper source code and uses the knowledge to reason and play the game through large language models (LLMs) such as GPT4. SPRING has two phases:
1. Phase 1 is studying the paper. SPRING extracts key information relevant to the game from the LaTeX source code and summarizes it in bullet-list form.
2. Phase 2 is reasoning and playing the game. SPRING uses the information summarized in Phase 1 as the context of the LLM, and formulates gameplay-relevant questions as a directed acyclic graph (DAG). When traversing the DAG, the agent only looks at the questions and answers in the parent nodes, which reduces context length and helps the agent focus. After the DAG of questions, the agent finally outputs an action to be executed in the game.
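The DAG traversal in Phase 2 can be sketched as follows; the question names and the `ask_llm` stub are hypothetical stand-ins for SPRING's actual prompts and the GPT-4 call.

```python
from graphlib import TopologicalSorter

# Stand-in for the GPT-4 call; a real implementation would query the API.
def ask_llm(context, question):
    return f"answer to {question!r} given {len(context)} chars of context"

# Map each question to its parent questions (a DAG); names are illustrative.
parents = {
    "q1: what do you see?": [],
    "q2: what do you have?": [],
    "q3: top priorities?": ["q1: what do you see?", "q2: what do you have?"],
    "q9: best action?": ["q3: top priorities?"],
}

def traverse(parents, observation):
    answers = {}
    # Topological order guarantees parents are answered before children.
    for q in TopologicalSorter(parents).static_order():
        # Each question sees only the observation plus its parents' Q&A,
        # which keeps the context short.
        context = observation + "".join(
            f"\nQ: {p}\nA: {answers[p]}" for p in parents[q])
        answers[q] = ask_llm(context, q)
    return answers["q9: best action?"]  # the final node yields the action

action = traverse(parents, "You see a tree. You have nothing.")
```

The key design choice this sketch illustrates is that a child node never sees the full transcript, only its parent nodes' questions and answers.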
The authors then did a comprehensive ablation study showing that 1) using the DAG of questions is important, 2) using extracted / cleaned information from the paper is important, and 3) using GPT-4 is much better than using GPT-3.5
Strengths: The paper is well-written and presents a carefully designed ablation study. The evaluation solidly demonstrated that the two phases of SPRING contributed to a higher reward. The approach of using an LLM is also highly novel for open-world survival games.
Weaknesses: The paper is quite well-done, but maybe one weakness is its reliance on proprietary models such as GPT4. In 5 years or so, it is unclear whether OpenAI will still make the (then probably deprecated) GPT4 available, so this work could become more difficult to reproduce.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: Could you include some gameplay and output examples in the appendix? They could help the readers better understand the flow of the system. Could you also disclose the computational cost associated with GPT-4 API calling used in the paper?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 3 good
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for recognizing the solidness of our work. Here are our responses to the questions and concerns:
W1 proprietary models:
Thank you for pointing out this important concern. We do notice that SPRING gets better after switching from GPT-4-0314 to GPT-4-0613, and we are hopeful that we get better performance with every new version. However, we are also actively experimenting with open-source LLMs like LLAMA-2 and we are hopeful to make a switch to an open-source LM in the future.
Q1 Examples and Gameplay:
We will include some rollout examples of SPRING in the final paper.
The number of queries per step is 9 (the same as the number of questions). Each game takes around 300 steps, but can go up to 500 steps in the worst case. Therefore, the maximum number of queries per game is 4,500. According to the public pricing of the GPT-4 API, each query costs around $0.06, so the total cost should be less than $270 (USD) with GPT-4. Note that the GPT-4 cost will be lower with academic discounts, and we expect it to decrease as the community makes progress on LLMs.
Strengths: 1. This paper is well-written and easy to follow.
2. The idea of reading paper with LLM to interact with environment directly is interesting.
Weaknesses: 1. Minecraft's open-world nature, extensive online community discussions (e.g., Reddit), and comprehensive wikis raise questions about why SPRING chose to use academic-paper LaTeX code for prompts instead of utilizing more abundant online corpora. It would be necessary to conduct a suitable ablation experiment.
2. The experiments conducted in the paper are insufficient. Currently, the paper only tests SPRING in the simplified 2D world of "crafter" with a modified action space. It is essential to investigate SPRING's performance in a real open-world environment, as the paper frequently references Minecraft. Experiments should be conducted in environments like Minedojo or Minerl.
3. Directly using LLM for action output may raise concerns about efficiency. Given the current parameter scale of language models and the multi-stage question-answering (QA) used by SPRING, can SPRING achieve real-time action output at 20fps in the crafter environment? Merely listing training steps is insufficient; please provide the average inference time consumption for all methods in Table 2.
4. Although the crafter environment itself is not part of GPT-4's training data, the rules used in crafter are fundamentally similar to Minecraft (e.g., using a pickaxe to mine stone), making it challenging to claim that the SPRING method is genuinely out-of-distribution (OOD).
5. In the QA-DAG section, the paper employs multiple manually designed questions as nodes, some of which may leak game rules, such as q1, q3.
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: Please address the questions raised in the weaknesses section.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 2 fair
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for recognizing the novelty of our work. Here are our responses to the questions and concerns:
W1 Online Corpora:
No other online corpora exist for Crafter since it is an academic benchmark environment. This benchmark also gives us the chance to test our framework on unseen data. Data on Reddit and other online corpora are most likely already seen by GPT-4 during its training process.
W2 Insufficient Evaluation:
Thank you for this insight and we plan to add Minedojo as a part of future work. However, given the fact that it is significantly harder to 'describe' Minecraft environments to LLMs, we make the decision to start with a simplified environment which abstracts out some complexity on the vision side.
Although our ultimate goal is to solve Minecraft, we are excited to share our progress with the community. We would like to re-iterate that Crafter greatly resembles Minecraft in the tech-tree aspect and our work shows that LLMs can understand and excel in games with complex tech-trees, which is a first step towards solving Minecraft.
W3 Inference Speed:
Current inference time depends on the speed of the OpenAI API, which involves HTTP requests and is therefore slower than some locally trained RL algorithms, depending on local machine capability. However, this also means that our inference can be done on any device (even a Raspberry Pi) with an internet connection. Therefore, we don't believe a comparison would be helpful for understanding the different algorithms.
However, we do recognize the necessity to reduce API calls for LLMs, and we are actively working on a more API-efficient solution as our future work.
W4 OOD:
Thank you for raising the concern. We will change the wording from OOD to 'unseen'.
In addition, please note that GPT-4 encounters difficulties playing the game of Crafter without reading the paper (Table 3 row 4).
GPT-4 encounters more 'hallucinations' without the context due to the fact that rules are fundamentally similar to Minecraft.
W5 prompts:
Thanks for raising the concerns. We will have GPT-4 write the questions in future work.
---
Rebuttal Comment 1.1:
Comment: Dear reviewer,
Thank you again for helping us improve our paper. We hope that our response has addressed your concerns.
Please let us know if there are any additional questions or concerns. | Summary: The authors propose a method, SPRING, to design an agent based on LLM question answering for the Crafter environment. Their method uses the Hafner 2021 paper describing the Crafter environment and its objectives as context to prompt the LLM about the next best steps in a chain-of-thought fashion using a DAG to organise the sequence of prompts, based on text descriptions of the environment provided by an image to text visual descriptor. SPRING outperforms other approaches, including some using LLMs as components.
Strengths: 1. A logical and interesting approach based on having Hafner 2021 as a good “game guide” for the player. Agent modelling with LLM has been a promising area of research, so I expect this paper to be of interest to a large audience. Thanks to the question-answering and chain-of-thought approach, this approach provides some explainability. What’s more, the authors show that their chain-of-thought approach with the DAG improves on other forms of chain-of-thought, highlighting the importance/impact of guiding the chain-of-thought process with their methodology.
2. The writing is good and clear.
3. I appreciate the authors keeping perspective by using quotation marks for reasoning.
Weaknesses: 1. The comparison to RL algorithms had to be done. However, it is important to keep in mind that SPRING is operating with explicit knowledge of “solutions”, or at least a guide, which is further assisted by the chain-of-thought prompting with the DAG. In a game where the difficulty is not mechanical, and which is about finding the right steps, at the risk of anthropomorphising the algorithm, it sounds clear to me as a human that having the details from Hafner 2021 “cheeses” the game. Pure RL agents in comparison have much more to learn by themselves. A corollary is that I might trust RL agents more for generalisation, due to exploration (the total failure of “place stone” is a good indication that the model is just following Hafner 2021 as a guide). This operates at two levels: your method has access to explicit formulations of the objectives and rewards that your benchmark is based on, and it has access to a guide on how to accomplish said objectives.
2. I would replace the claim L47 that the environment is OoD. Being unseen doesn't mean it's out of distribution, I would correct it to "unseen". For example a model trained on StarCraft 1 would probably be decent at StarCraft 2 because StarCraft 2 corresponds to a distribution that's shifted but probably overlaps to a large extent with that of StarCraft 1. Same for a model trained on shooter games. It may very well be the case here that a lot of knowledge from the LLM pretraining transfers well (again at the risk of anthropomorphising the model, but think about a player used to similar video games as crafter vs someone who never played a video game. Wouldn’t the video-game aficionado progress much faster ? That would be mostly due to the fact that past experience transfers in spite of the environment being unseen).
3. Table 3: if possible, please indicate scores of each modality too.
4. Abstract: in summarising your results, please specify you’re evaluating on Crafter (I assume you intended to specify it originally, as otherwise the sentence “We propose a novel approach, SPRING, to read the game’s original academic paper” lacks meaning). This is important since your methodology is tailored to Crafter, using Hafner 2021 with all benchmarks being on Crafter.
I believe adding a discussion of 1. for SPRING vs other baselines would be very valuable to readers.
**Typos and suggestions:**
1. Missing whitespace in abstract between node and directly
2. L64 "independently in."
3. L69 "performance performance"
4. Figure 3: Grass 1 step no s
5. L136 simpler instead of more simple
6. L196 “achives”
7. L306: “reliance of” -> on
8. Table 2: Just a remark, not affecting my judgement: it’s important to keep in mind that the improvement in reward over DreamerV3 is not statistically significant.
9. Suggest using citep for references that aren't grammatical parts of sentences, will improve legibility (e.g. “It has been done previously [Whatever et al.]” instead of “It has been done previously Whatever et al”).
Technical Quality: 4 excellent
Clarity: 3 good
Questions for Authors: 1. Will you be providing code for reproducibility ?
2. What is the success rate computed from in 3.2? Multiple runs?
3. It is not clear whether Figure 4 of Hafner 2021 is read by the LLM somehow (specifically the arrows indicating achievement orders). Can you clarify how explicitly the LLM is provided with the tech tree dependencies please ? Is there another source for that knowledge than Figure 4 of Hafner 2021 (because surely the tokeniser can’t process the arrows) ?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 3 good
Contribution: 4 excellent
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for appreciating our chain-of-thought approach and recognizing the potential of our work. Here are our responses to the questions and concerns:
W1 Game complexity:
Thank you for pointing out an important aspect that the Crafter lacks mechanical difficulty, which is indeed the intended goal of our paper. However, this does not mean that the game would be obvious for human players who read the paper carefully. The instructions (Hafner et al.) only cover the high-level game mechanics, i.e., the tech-tree and the enemies. It is unclear where to find the resources and when you would encounter a zombie/skeleton. The exact speed and movements of important objects: cow, zombie, skeleton, arrows (shoot by skeleton) are not documented in the paper. In fact, the paper did not even mention that 2 woods (as opposed to 1) are required to build a crafting table. All of the above missing low-level details still pose challenges to LLMs and novice human players.
In addition, Hafner et al. only demonstrated the existence of many conflicting objectives like: "Do I explore or build shelter.", "Do I focus on survival stats or do mining?", but never provided a solution. Our experience is that it may take a human player several trials to reach a score higher than our LLM due to some of the difficulties mentioned above.
In regards to SPRING vs. RL in the generalization setting. We think that it is hard to say if one is better than the other.
On one hand, RL algorithms are trained with reward functions deliberately engineered to cover all in-game achievements. Engineered reward functions often require a lot of expert knowledge and careful formulation.
On the other hand, SPRING does not need the reward (we report reward for comparison purpose, SPRING does not use the reward during inference), but instead uses external knowledge (Hafner et al.).
We will include a discussion of SPRING vs. RL in the final paper. However, we will not be able to include more insights (than Table 3, row 2) on how LLM adapts to some of the missing low-level inaccuracies since we are still actively working in this direction.
W2 OOD:
Thank you for pointing out this inaccuracy. We will change our phrasing of OOD to unseen in the final version
W3 Multi-modality:
Apologies for any potential misunderstanding. Do you mean removing the visual modality? The visual modality (visual descriptor) is necessary for the agent. Without the visual modality, it will not know where the resources are and if it is facing any danger, which are fundamental to success in the game.
W4 Abstract:
Thank you for pointing out this inaccuracy. We will update the abstract to explicitly mention that we only benchmark on crafter in the final version.
Misc:
Thank you for pointing out minor improvements in our paper. We will correct them in the final version.
Q1 Code:
Yes, we will release the code upon publication.
In addition, we have already attached ALL prompts used in the main paper.
Q2 Success Rate:
Success rate is calculated as (number of times an achievement is achieved)/(total number of experiments). In our case the total number of experiments is 5.
Q3 Figure 4 from (Hafner et al.):
Figure 4 is not read since the format is not in LaTeX.
However, the same information is also in Appendix Table F.1 of the paper which is in a fully readable LaTeX table format.
We will add clarification to this confusion in the final version of our paper.
---
Rebuttal Comment 1.1:
Comment: I would like to thank the authors for their detailed reply.
> All of the above missing low-level details still pose challenges to LLMs and novice human players.
In addition, Hafner et al. only demonstrated the existence of many conflicting objectives like: "Do I explore or build shelter.", "Do I focus on survival stats or do mining?", but never provided a solution. Our experience is that it may take a human player several trials to reach a score higher than our LLM due to some of the difficulties mentioned above.
Reviewing aside, I would argue that this is precisely because a human would try exploring "policies", which will only pay off in the longer run. I agree that reward modelling is where a lot of the complexity of RL gets pushed.
> Apologies for any potential misunderstanding. Do you mean removing the visual modality? The visual modality (visual descriptor) is necessary for the agent. Without the visual modality, it will not know where the resources are and if it is facing any danger, which are fundamental to success in the game.
Sorry, I meant for each method.
I have increased my score (7 -> 8).
---
Reply to Comment 1.1.1:
Comment: Thank you for the recognition of our work.
- Exploration.
Thank you for identifying this important direction. As a future direction, we have been studying "exploration" with LLMs similar to how a human player would explore and learn.
- Score for Table 3
For Table 3, we decided not to calculate scores, since the scores were designed for benchmarking the exploration (how many achievements an agent unlocks) and training speed (when an agent unlocks achievements) of RL policies.
Neither is relevant for LLM-based policies, which were not trained on Crafter. Therefore, we instead focus on metrics that demonstrate achievement sophistication (achievement depth) and achievement amount (reward).
Thank you for pointing out this omission; we will add a clarification for our choice of metrics in the final version of the paper. | Summary: SPRING is a method that reads the original academic paper for a game (Crafter) and uses a large language model (LLM) to reason about how to play it.
1. study the paper: a fixed set of questions is used to summarize gameplay and action-space information, with paragraph-level decomposition and then question-wise aggregation.
2. reasoning: a fixed list of 9 questions arranged in a DAG is answered by the LLM using game information (described in text) and answers from parent questions.
Results are impressive --- such a zero-shot model beats RL baselines with 1M training steps, also with a smaller variance.
Strengths: - It is an important and popular direction now to study how to implement RL agents using LLM, so as to avoid typically millions of training steps and problems like generalization and robustness. The paper makes great progress in this direction, significantly beating RL baselines in the game of Crafter.
- Components are reasonable and fairly novel. One can imagine these techniques being useful for other games as well.
- Good component and ablation studies, and good presentation.
Weaknesses: - Since only one game is studied, it is harder to understand how much of the performance comes from the human inductive biases in designing the pipeline (e.g. questions chosen, ad-hoc descriptor rules, etc.) --- these might be strong enough priors that the LLM just serves as a symbolic processor, and might even be replaceable with some simple rules. It would be cool (but hard) to see to what extent the LLM itself could also be replaced by some heuristic rules to solve the game --- and if that performs poorly, the use of the LLM (and the hardness of the game) is more justified.
- I'm a bit surprised (maybe not that surprised) "Learning to Win by Reading Manuals in a Monte-Carlo Framework" is not cited, which is probably the groundbreaking work in this direction from 12 years ago.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: - what's the cost/speed for GPT-4 experiments?
- is there any statistics for RL performances beyond 1M steps, which might not be trained enough?
- typo: line 81 "key aspects the visual observation"; line 64 "independently in"
- use \citep
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: limitation about visual descriptor is mentioned.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for recognizing the novelty and importance of the direction of our work. Here are our responses to the questions and concerns:
W1 Rule-based agents:
Please note that the complexity of the game often goes beyond human inductive bias. The game involves a tech tree that resembles Minecraft's. It is unclear whether a viable rule-based agent exists, as there are often conflicting objectives like "Do I explore or build shelter?" and "Do I focus on survival stats or do mining?"
If huge effort were put into writing a rule-based agent, similar effort could instead be used to improve the context prompt to the LLM (which is currently extracted automatically, straight from the paper).
We will provide more example trajectories of the game in the appendix to demonstrate the complexity for the final version.
W2 Missing citations:
Thank you for pointing out the missing citation. We will add this to our final version.
Q1 Cost/Speed of LLM:
The number of queries per step is 9 (same as the number of questions). Each game can take around 300 steps, but can go up to 500 steps in the worst case. Therefore, the maximum number of queries per game can go up to 4500. According to the public price of the GPT-4 API, each query costs around $0.06. The total cost should be less than $270 (USD) per game with GPT-4. We would also like to point out that as this area of research is becoming increasingly popular, the cost per query is becoming cheaper and the inference is becoming faster.
The actual speed of the agent largely depends on the speed of the LLM API.
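The estimate above amounts to the following back-of-envelope arithmetic (figures taken from the numbers quoted above; API pricing may have changed since):

```python
QUERIES_PER_STEP = 9        # one query per question in the question list
MAX_STEPS_PER_GAME = 500    # worst-case episode length
COST_PER_QUERY_USD = 0.06   # approximate public GPT-4 API price per query

max_queries = QUERIES_PER_STEP * MAX_STEPS_PER_GAME
max_cost = max_queries * COST_PER_QUERY_USD
print(max_queries)         # 4500
print(round(max_cost, 2))  # 270.0
```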
Q2 1M training steps:
The 1M-step budget is a rule set by the original game developers (Hafner et al., https://arxiv.org/pdf/2109.06780.pdf). Therefore, no scores exist beyond 1M steps.
Misc:
Thank you for pointing out minor improvements in our paper. We will correct them in the final version.
---
Rebuttal Comment 1.1:
Title: Thanks
Comment: Thanks. I want to keep the score as is for now. | null | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Spuriosity Rankings: Sorting Data to Measure and Mitigate Biases | Accept (spotlight) | Summary: Shortcut learning has received increasing attention from the community recently. The paper proposes measuring and ranking data by "spuriosity," or the degree to which relevant spurious cues are present, as a way to detect and mitigate biases in deep models that arise from their tendency to rely on spurious correlations in the data. The authors use an interpretable model to identify neural features relevant to a class and then select the spurious ones based on limited human supervision. Ranking data by the activation of these spurious features yields many benefits, like revealing less biased subsets, quantifying model bias via "spurious gaps" in accuracy between high and low spuriosity data, and finetuning models on less spurious data to obtain robust performance. Analyzing spurious gaps for 89 models reveals that all underperform on less spurious data, indicating that bias stems more from what data models are trained on than from how they are trained, and that spuriosity rankings provide an efficient and interpretable method to complement model-centric bias mitigation approaches.
Strengths: 1. This paper works on a timely and important problem. The paper is overall well written (except its dependence on a related work [42]) and I enjoy reading it.
2. It provides a scalable method to discover and rank data by how much spurious features they contain, i.e., spuriosity, which can help detect and mitigate biases in models.
3. This paper proposes to finetune models on low spuriosity data to improve performance on less biased instances while maintaining overall accuracy, resulting in more robust and more stable models.
4. Experiments results are insightful. The analysis of spurious gaps for many models indicates that biases stem more from the data than how models are trained, suggesting that data-centric approaches like this paper's can complement standard model-centric bias mitigation techniques.
Weaknesses: 1. One major concern is that this work is heavily based on a related work [42], which seems like from the same team of this submission. This makes the writing of this paper less friendly for readers not familiar with [42]. The authors are encouraged to revise the writing to make it more self-contained.
2. In line 176-177, the authors claim that “we are the first to uncover this racial bias in the Celeb-A benchmark”. Actually, this racial bias issue in Celeb-A has been well studied in various literature.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: None
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: Yes, the authors adequately addressed the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their time and comments. Also we appreciate the reviewer for saying the problem we tackle is ‘timely and important’, our ‘results are insightful’, and ‘the paper is well written’. We are very happy to hear you ‘enjoyed reading it’.
**Clarity**
We thank the reviewer for suggesting how to further improve our presentation, specifically w.r.t. explaining the mentioned prior work, from which we utilize their spurious feature discovery method. We note that we made multiple intentional efforts to make our work self-contained; we draw attention to these now. In the main text, we explain the method from prior work in several spots, each time providing more details: first at a high level in the introduction (L37-41), then in more depth in the related literature (L113-118), then in extensive detail in Section 3.1 (L132-148), where we also later explain our novel contributions atop the prior work (L155-161). Second, we design multiple visualizations to explain the framework, including images demonstrating specific methodologies from [42] (figures 2 and 3). Lastly, we note that we discuss the mentioned prior work in great detail in Appendix F, which includes a subsection (F.1) intended precisely to review the prior work so as to keep our paper self-contained.
To further aid the extensive explanations detailed above, we will add the following to the revised draft:
More explicit mentions to Appendix F to guide readers who desire more details.
An additional appendix section that serves as a taxonomy for terms like ‘robust neural feature’, ‘feature attack’, ‘neural activation map’, etc.
We hope these amendments will improve the presentation of the paper in the eyes of the reviewer, and if they do, we humbly ask that the reviewer increase their score for ‘presentation’.
Lastly, we highlight that while we utilize a method from the prior work, we make several novel and impactful contributions atop it, detailed in L155-161 and at length in Appendix F.3.
**Racial Bias in Celeb-A Hair Classification**
We are unaware of any mentions of racial bias in the Celeb-A hair classification task in prior work. We conducted an additional review of literature, and were unable to find such work. While [1] mentions that Celeb-A likely has a bias toward images of white people, we note that this bias is different from the one we claim to discover. We refer to how “a model may spuriously associate brown skin with the brown hair class, likely leading to failures for blond-haired brown-skinned people” (L175-176), and only discuss Celeb-A in the context of the hair classification task, a known benchmark in spurious correlation literature [2]. If the reviewer has a different citation in mind, we would be more than happy to include the citation and walk back the claim. Furthermore, we will change ‘Celeb-A benchmark’ to ‘Celeb-A hair classification benchmark’ to make this even clearer.
[1] FairFace: Face Attribute Dataset for Balanced Race, Gender, and Age for Bias Measurement and Mitigation, Karkainen et al, WACV 2021
[2] WILDS: A Benchmark of In-the-wild Distribution Shifts, Koh et al, ICML 2021 | Summary: This paper proposed a framework to measure model biases by ranking images within their class based on the strength of spurious cues and evaluating the gap in accuracy on the highest and lowest ranked images.
The analysis is comprehensive for a very large number of models.
Strengths: This paper proposed a simple method to measure model biases by ranking images within their class based on the strength of spurious cues.
The proposed method is simple yet novel and unprecedented, and it seems reasonable.
In particular, the strongest point is that the analysis is comprehensive, covering a very large number of models.
Weaknesses: It seems more compatible with a computer vision conference than a machine learning conference.
The definition of spuriosity in Sec 4.1 is a bit ad hoc.
The ending without discussion or conclusion is a bit of a dead end in terms of the paper structure.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: The definition of spuriosity in Sec 4.1 appears to implicitly assume normality with respect to r.
For example, if the distribution of r is long-tailed, the variance would be too large, making this definition undesirable.
This may not be a problem if the features come after batch normalization, but for some models this calculation may be undesirable; would that affect the measurements?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The validity of the interpretation method for deep learning models itself is directly related to the validity of this method.
The significance and meaningfulness of the proposed method is entirely dependent on the interpretation model.
Because several papers have shown that explanations using heat maps are sometimes fragile, the usefulness of this technique is influenced by the goodness of other techniques.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their kind words. We now address each concern.
**On the modality-agnostic potential of Spuriosity Rankings**. While we demonstrate our method on a fundamental task (classification) in a fundamental modality (vision), we believe our framework and the lessons obtained by them are relevant for all in the ML community.
We demonstrate that the tendency of ML models to specialize the function of deep neural nodes can be leveraged to scalably sort data, specifically by finding such nodes that focus on spurious features. We also show the power of sorting data, to various ends, including interpreting spurious features, measuring bias caused by them, and fixing those same biases. We hope our work will inspire people from all subfields to consider sorting their data, namely by re-using existing models.
Our large-scale empirical study also sheds important light on the spurious correlation problem, which is pervasive in all ML subfields. Namely, we learn that since diverse models seem to inherit the same biases, data-centric approaches are needed. Further, we detail the crucial class-dependence in defining and handling spurious correlations. Since most existing research around spurious correlations (across fields) is model/algorithm-centric and class-agnostic, we hope our findings will be novel and insightful to a wide audience.
While implementation details (i.e. how to interpret and utilize spurious concept detectors inside existing models) will vary, we believe that Spuriosity Rankings at its core is modality-agnostic, and thus potentially impactful to anyone studying spurious correlations.
**Conclusion**. We will add a conclusion in the updated draft (where we will have an extra page) so to tie together the whole work, with emphasis on higher level takeaways that apply to wider audiences, like the ones mentioned above.
**Spuriosity approximation in Sec 4.1**. We note that we offer just one way to approximate spuriosity, which is in many ways qualitative. We provide more detailed explanations of the notion of spuriosity and its implications in the introduction (L30-37). We approximate spuriosity quantitatively by averaging normalized activations of robust neural features annotated as detecting spurious features. We normalize these activations so that one neural feature does not dominate others in the case that feature activation distributions vary significantly (i.e. the long-tailed case the reviewer describes). Moreover, since we ultimately wish to rank images, we aim to average the ‘percentile’ an image falls w.r.t. activation on any of the spurious neural features, which is proxied by the normalized activation we employ. At the suggestion of the reviewer, we inspect activations of neural features in a diverse model set, finding that they all generally follow a normal distribution on their right tail. Further, within each model, activation variances are similar across neural features. Thus, while we thank the reviewer for drawing attention to a potential shortcoming, we feel comfortable in the reliability of our method. Nonetheless, spuriosity can be approximated in different ways, and we are eager to see how others in the community will do so in future work.
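As a concrete (though simplified) sketch of this computation, one could z-normalize each neural feature's activations within a class and then average over the human-flagged spurious features; the array shapes and feature indices below are illustrative assumptions, not our exact implementation:

```python
import numpy as np

def spuriosity(acts, spurious_idx):
    """acts: (n_images, n_features) activations for one class.
    Returns a per-image spuriosity score."""
    mu = acts.mean(axis=0)
    sigma = acts.std(axis=0) + 1e-8           # guard against zero variance
    z = (acts - mu) / sigma                   # normalize each feature
    return z[:, spurious_idx].mean(axis=1)    # average the spurious features

rng = np.random.default_rng(0)
acts = rng.normal(size=(100, 16))             # toy activations
scores = spuriosity(acts, spurious_idx=[2, 5, 11])
ranking = np.argsort(-scores)                 # highest spuriosity first
```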
**Quality of employed interpretability methods**. We employ an interpretability method with significant precedent [42, 40 in original references], and further conduct extensive human validation of our interpretations (detailed in Appendix F). We agree with the reviewer that the success of our approach relates to the quality of interpretability methods employed, and we argue that this is a strength, as our method will continue to improve as models become more interpretable, which due to increased regulation, we believe is a certainty in the future. | Summary: The paper proposes a method to rank images in a given dataset in order of their "spuriosity". The proposal uses methodology from previous work [42] which examines most-active neural features of a trained model at per-class level, and hand-labels some sample images w.r.t. whether those features are “core” or “spurious”.
The authors 1) scale out the labeling to ImageNet-1k, 2) use these sample labels to rank instances according to "spuriosity", 3) present a range of qualitative and quantitative results arguing that they can, e.g., reveal minority subpopulations, measure model bias & mitigate it, enable study of model stability by distributional perturbations, identify noisy labels, etc.
Strengths: 1. The authors give a data-centric view of the problem of spurious correlation – i.e., instance rankings instead of feature / attribute labels or descriptions. This paper extends the study of [42] to all 1000 classes of ImageNet.
2. The authors present some useful analyses e.g., showing ubiquity and correlation of bias across models and training methods, demonstrating “auto-”discovery of novel dimensions of bias using the new supervision, and the class-specificity of certain biased features. In particular, it makes sense, as the authors argue, to consider the “spuriosity” of a feature at class granularity rather than dataset granularity.
3. The proposal of data rankings opens up the possibility of trying out many mitigation methods and metrics to evaluate bias. The authors demonstrate one simple mechanism, and also suggest that their rankings can help flag mislabelled examples.
Weaknesses: 1. There is no quantified measure to evaluate the quality of rankings proposed. Is it possible to look at synthetic or small data to get a quantitative sense of ranking quality?
2. The bias mitigation approach is not compared with the very wide existing literature on bias / spurious feature mitigation – this limits a thorough assessment of the value of the proposed rankings in mitigation.
3. The claim about identifying incorrect labels is largely qualitative – some quantification would strengthen the case.
4. The process of identifying spurious correlations requires human supervision–this can limit the applicability of the approach, both in terms of only considering or covering some aspects of bias, and in terms of labeling costs which are higher than typical category labeling.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: 1. Overall, if many of the qualitative / exemplar applications could be quantified with more rigorous comparisons, this would strengthen the paper. E.g., recovery of “known” rankings, human-rating or other quantification of ranking quality, comparison against at least some simple bias mitigation methods, quantitative success at identifying label noise, etc.
2. Are the rankings / human labeling data to be released publicly? Are the authors claiming a “dataset contribution” as part of their submission?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations: The process of identifying spurious correlations depends on a) neural activations of trained models, and b) human supervision–this can limit the applicability of the approach, both in terms of only considering or covering some aspects of bias, and in terms of labeling costs which are higher than typical category labeling (although, of course, the entire dataset does not need to be labeled).
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for taking the time to truly engage with our paper and provide insightful comments. We now address main concerns.
**Validating Rankings**. Given the complexity of sorting thousands of images, evaluating the quality of rankings is non-trivial, especially since a single ground-truth ranking often does not exist. Nonetheless, we validate several key properties of our rankings. First, we claim our rankings organize data along interpretable notions. Prior work shows that robust neural features are interpretable [42], and we similarly confirm this for all robust neural features used in our study (Appendix F.1.1). Next, we claim that our rankings identify potential biases caused by spurious correlations. In a large-scale evaluation of 89 models, we find that all models underperform on low spuriosity images, thus validating that our rankings capture biases. Further, we claim that high spuriosity images contain the spurious concept of interest. We validate this with a human study, finding that images in the top 20th percentile of spuriosity contain the intended spurious cues in nearly 90% of cases (see Appendix F.1.2). To provide further validation, we conducted an additional check during the rebuttal period. To confirm that the low spuriosity images do not contain the relevant spurious cues, we inspect the ten lowest spuriosity images for 20 randomly selected classes. We find that the relevant spurious cues are absent in 97% of inspected training images and 93% of inspected validation images. See Appendix C for a visualization of 5 randomly selected classes.
While the specific rankings obtained from our proposed framework do not have a straightforward evaluation procedure, this does not take away from the value of the framework as a whole. We believe the core idea of scalably sorting data along interpretable directions by leveraging existing models is novel, significant, and effective, as demonstrated by our experiments. We thank the reviewer for recognizing that our proposal ‘opens up the possibility of … many mitigation methods and metrics.’
A Potential Automated Quantitative Scheme to Validate Rankings
Despite the challenging nature of validating rankings, we believe an automated method can be constructed. Namely, one could utilize an open-vocabulary object detector or image tagger to annotate the objects / concepts that are present in highest and lowest spuriosity images (based on some proposed ranking we wish to evaluate). Then, one could assess ranking quality as the difference in presence of some detected spurious tag between the most and least spuriosity images. We find this an interesting avenue of future research, but opted for human validation in this paper, so as to avoid inheriting biases from the auxiliary model used in validation. We agree with the reviewer that such an automatic quantitative evaluation method would be of value, and hope to pursue this in follow up work.
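A minimal sketch of this scheme, where `detect_tags` is a hypothetical stand-in for a real open-vocabulary detector or image tagger (not an implementation we actually ran):

```python
def tag_presence_gap(top_images, bottom_images, spurious_tag, detect_tags):
    """Ranking-quality proxy: difference in spurious-tag presence between
    the most- and least-spurious images. `detect_tags` is assumed to map
    an image to a set of detected concept tags."""
    def frac(images):
        images = list(images)
        hits = sum(spurious_tag in detect_tags(img) for img in images)
        return hits / len(images)
    return frac(top_images) - frac(bottom_images)

# toy check with a fake tagger that "detects" water only in images 0-3
fake_tags = lambda i: {"water"} if i < 4 else {"grass"}
print(tag_presence_gap(range(4), range(4, 8), "water", fake_tags))  # 1.0
```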
**Comparing to existing bias mitigation techniques**. We prioritized conveying our approach over comparing to other methods, many of which differ in fundamental ways (e.g. data centric vs agnostic) from our idea. Nonetheless, we agree with the reviewer that more comparison would be insightful. We'll include more discussion of other methods in the revised draft.
Further, we perform a new experiment in the rebuttal period, in which we finetune model classification heads on misclassified samples (a common approach). We find that finetuning on misclassified training samples does not close spurious gaps. Also, tuning on errors leads to a larger reduction in validation accuracy (see figure in attached pdf). We conjecture this occurs because errors can be caused by many things aside from the absence of spurious cues, such as label noise. Thus, tuning on misclassified samples could lead to overfitting on unreliable data. In contrast, low spuriosity images are designed to only differ from typical samples in that they lack particular cues that have already been deemed spurious by a human. Arguably, low spuriosity images can at times offer more reliable learning signal, as they do not contain distracting shortcuts (e.g. the low spuriosity lighters in figure 1 are far easier to see than the high spuriosity ones).
**Validating label-noise flagging**. We thank the reviewer for reading our paper to the end! In section 5, we present qualitative evidence that spuriosity rankings can aid in flagging mislabeled samples, as high spuriosity images with severe negative gaps appear to be mislabeled. To strengthen this claim, we conduct a new experiment leveraging ImageNet ReaL labels [1], which indicate for each ImageNet validation image whether objects from multiple classes are present. ReaL labels show 14% of validation images contain multiple objects. We find that for classes with the 5 most negative spurious gaps (averaged over our model suite), high spuriosity (i.e. ranked in top 10% of spuriosity) validation images contain multiple objects in **80%** of cases. In contrast, low (bottom 10%) spuriosity images from the same classes contain multiple objects in only 8% of cases, indicating that the label noise is likely a result of the hypothesized spurious feature collision. Similarly, for classes whose spurious gap is less than -20%, ReaL labels reveal 42% of the high spuriosity images to contain multiple objects, many times higher than the average rate of label noise. We'll include discussion of this quantitative validation in the updated draft and thank the reviewer for the suggestion.
**Release**. We cannot recall if a dataset contribution was claimed, though regardless, we intend on taking every step to keep our work reproducible and accessible to future researchers. We have already built a web UI for easy viewing and access of our annotations.
**Human cost**. See global rebuttal for ideas on how Spuriosity Rankings can be fully automated.
[1] Are we done with ImageNet? Beyer et al, 2020
---
Rebuttal Comment 1.1:
Title: response to authors
Comment: Thank you for your detailed response.
I remain convinced that the work is broadly along directions useful to image understanding and machine learning in general.
I am comfortable with my assessment / rating of "weak accept". | Summary: - This work proposed Spuriosity, a quantity for determining the spuriosity ranking of data. The framework builds upon [42], which identifies spurious and core neural features by analyzing the neural activation map of an adversarially trained model. Spuriosity is defined based on these spurious neural features. The authors also proposed the concept of ‘spurious gap,’ which measures the accuracy drop between the top-k highest- and lowest-spuriosity validation images. By re-training the classifier head from a subsampled dataset using spuriosity, the authors demonstrate a reduction of around 10-20% in the spurious gap at the cost of 1-3% reduction in validation accuracy.
- While I acknowledge the challenging nature of the addressed problem and the novelty of this work, I believe that the experiments in this paper primarily focus on the use cases of the proposed quantities rather than providing comprehensive validation of them.
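As I understand it, the 'spurious gap' described above amounts to roughly the following (an illustrative sketch, not the authors' code; variable names are my own):

```python
import numpy as np

def spurious_gap(correct, spuriosity_scores, k):
    """correct: (n,) bool array of the model's per-image correctness
    for one class. Returns accuracy on the top-k minus the bottom-k
    spuriosity images; a large positive gap suggests the model relies
    on the spurious cue."""
    order = np.argsort(spuriosity_scores)
    low_acc = correct[order[:k]].mean()    # least spurious images
    high_acc = correct[order[-k:]].mean()  # most spurious images
    return high_acc - low_acc
```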
Strengths: - This paper addresses a challenging problem: figuring out the spuriosity of large-scale datasets and investigating its impact on model training. I highly appreciate these contributions, as the field often focuses on improving performance on simplistic benchmark datasets such as Waterbirds.
- The paper introduced a novel approach that leverages the neural features to distinguish minority samples, which is algorithmically distinct from the previous works such as [1, 2, 3], which rely on classification results of auxiliary models.
- Their method can be effectively combined with balanced re-training methods, such as DFR.
[1] Liu, Evan Z., et al. "Just train twice: Improving group robustness without training group information." International Conference on Machine Learning. PMLR, 2021.
[2] Kim, Nayeong, et al. "Learning debiased classifier with biased committee." Advances in Neural Information Processing Systems 35 (2022): 18403-18415.
[3] Hendrycks, Dan, et al. "Natural adversarial examples." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2021.
Weaknesses: - Regarding the Spuriosity ranking, I have concerns about the lack of quantitative metrics and comparison with baselines, making it hard to estimate how reliable the suggested quantity is and how much better it is than other methods. I suggest comparing the performance of detecting minority samples with [1, 2] on group-annotated benchmark datasets such as Waterbirds and CelebA.
- Another concern is the lack of a baseline and metrics for the Spurious Gap. Say someone proposed a Spurious Gap v2; how would one know which is better?
- The proposed method for detecting spurious correlation relies on human effort.
- Minor correction for misinterpreted terminologies
- Interpretable model: Consider using an ‘adversarially trained model,’ as normally trained models are also interpretable.
- [L239] How it is trained: This expression is too broad. It seems to encompass algorithmic approaches like GroupDRO, etc.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: - [Figure 8] How about comparing the reduction in the spurious gap with a subsampled dataset constructed using misclassified data from an auxiliary model?
- Is there any idea for establishing the quantitative metrics and baselines of Spuriosity ranking and Spurious gap?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: - Their method heavily relies on interpretation methods and human judgment, which is costly.
- The utilized neural activation map cannot distinguish certain features that are not discernible based on spatial information alone, such as the color and texture of the objects.
- Most of the framework originates from previous work [42], which diminishes the contribution of this paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely thank the reviewer for taking the time to read our paper and provide insightful comments. We address them below.
**Comparison to existing baselines**
We thank the reviewer for noting that our method tackles the “challenging problem [of] figuring out spuriosity on large scale datasets”, in contrast to most work which “focuses … on simplistic benchmark datasets”. We also thank the reviewer for their suggested follow up experiment, in which we finetune model classification heads on misclassified samples, a la JTT or LfF. To provide additional quantitative demonstration of the advantages of our approach, we perform this experiment, and find that finetuning on misclassified training samples does not close spurious gaps. Also, we find that tuning on misclassified samples leads to a larger reduction in validation accuracy (see table attached to global rebuttal). We believe the reason for this is that misclassifications can be caused by many things aside from the absence of spurious cues, including label noise. Thus, tuning on misclassified samples could lead to overfitting on unreliable data. In contrast, low spuriosity images are designed to only differ from typical samples in that they lack particular cues that have already been deemed spurious by a human. Arguably, low spuriosity images can at times offer a more reliable learning signal, as they do not contain distracting shortcuts (e.g. the low spuriosity lighters in figure 1 are far easier to see than the high spuriosity ones).
**On evaluating ranking quality, with a potential automated solution**
We thank the reviewer for their important question. We note that given the complexity of sorting a large number of images, validating/evaluating the quality of rankings is non-trivial, especially since a single ground-truth ranking often does not exist. In our work, we primarily use human studies to validate several key properties of our rankings; we refer the reviewer to our rebuttal to Reviewer ck2C for more details.
While we believe our validation is convincing, we agree that in the future, an automated quantitative procedure would be valuable to the community. In response to the reviewer’s question, we hypothesize the following scheme: one could utilize an open-vocabulary object detector or image tagger to annotate the objects / concepts that are present in highest and lowest spuriosity images (based on some proposed ranking we wish to evaluate). Then, one could assess ranking quality as the difference in presence of some detected spurious tag between the most and least spuriosity images. Options for quantitative metrics could be the difference in (a) the number of times a spurious concept occurs in the top vs. bottom spuriosity images, (b) the similarity of top vs. bottom spuriosity images to the text embedding of a spurious concept (embedded using a vision-language model like CLIP). We find this an interesting avenue of future research, but opted for human validation in this paper, so as to avoid inheriting biases from the auxiliary model used in validation. We agree with the reviewer that such an automatic quantitative evaluation method would be of value, and hope to pursue this in follow-up work.
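A minimal sketch of metric (b) above, using synthetic embeddings as a stand-in for CLIP features (all names and numbers here are illustrative, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for CLIP embeddings: in practice, `images` would be image embeddings
# and `concept` the text embedding of a spurious concept.
n_images, dim, k = 200, 64, 20
concept = rng.normal(size=dim)
concept /= np.linalg.norm(concept)

# Half the images "contain" the spurious cue: their embeddings are tilted toward it.
has_cue = np.arange(n_images) < n_images // 2
images = rng.normal(size=(n_images, dim))
images[has_cue] += 2.0 * concept
images /= np.linalg.norm(images, axis=1, keepdims=True)

# A proposed spuriosity ranking to evaluate (here: cue presence plus a little noise).
scores = has_cue.astype(float) + 0.1 * rng.normal(size=n_images)
order = np.argsort(scores)  # ascending spuriosity
bottom, top = order[:k], order[-k:]

# Metric (b): gap in mean cosine similarity to the spurious-concept embedding
# between the top-k and bottom-k spuriosity images; larger = better separation.
sim = images @ concept
gap = sim[top].mean() - sim[bottom].mean()
print(f"ranking-quality gap: {gap:.3f}")
```

With real CLIP embeddings, the last three lines would apply unchanged; only the synthetic data generation would be replaced by an actual encoder.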
**Minor corrections**. We thank the reviewer for the detailed comments. We will use softer language in L239. We note that we intentionally use the term ‘interpretable’ model, as our overall framework can certainly extend to non-adversarially trained models. So long as one can interpret internal nodes or, more broadly, directions in the representation space of a deep model, one could use the model to extract spurious concept detectors and proxy spuriosity quantitatively.
**Use of human judgment**. We refer to the global rebuttal, where we detail how human involvement can be removed if desirable.
**Beyond spatial interpretation of neural feature focus**. While NAMs only highlight a particular image region, feature attacks change the image to amplify the true concept causing activation on the neural feature of interest. For example, in Figure 3 (right), while the NAM may leave open the question of whether the butterfly or flower causes activation, the feature attack actually replaces the butterfly with more flowers, making the focus of the feature crystal clear.
**Contributions over prior work**. We note that the key idea of our work (i.e. sorting data at scale towards interpreting, measuring, and mitigating bias) is novel relative to prior work. While we utilize the spurious feature discovery method of [42], we believe it is just an implementation for one step of the pipeline (extracting concept detectors from trained networks). We refer to Appendix F.3 for extensive discussion of the novel contributions of our work compared to [42].
---
Rebuttal Comment 1.1:
Title: Response to authors
Comment: I appreciate your detailed response.
I believe the contributions of this paper are good enough to be accepted. However, there are remaining concerns.
- As y13p mentioned, revising the paper to be self-contained would be important.
- I agree that the result presented in Appendix F.1.2 is convincing for validation. Nonetheless, I believe there are still remaining tasks required to fully validate the Spuriosity ranking. For example, within this paper (https://arxiv.org/abs/2206.10843), Figure 4 presents the ability to rank the spuriosity of the training dataset. Also, one could establish the naive ranking based on learning speed (sorting samples based on training loss). Although these methods might be affected by label noise, employing them as baselines and checking their performance through the human study in Appendix F.1.2 could benefit this community.
Many of my original concerns are alleviated now, so I increased my review score. | Rebuttal 1:
Rebuttal: We sincerely thank all reviewers for their time and insightful comments. A couple reviewers wonder if the involvement of a human in our framework is a limitation. We provide extensive discussion below, where we argue that having a human in the loop is a strength, though it is not necessary, as automated alternatives are completely feasible within our proposed Spuriosity Rankings framework.
We would also like to draw attention to the four small additional experiments/validations conducted during the rebuttal period.
i. We add a new baseline inspired by a common family of approaches in bias mitigation, where a model upweights error samples. We find that this tuning does not close spurious gaps and reduces validation accuracy more than our proposed method.
ii. We perform a small scale human study to validate that low spuriosity images do not contain the relevant spurious cues detected by robust neural features annotated as spurious.
iii. We leverage ImageNet ReaL labels to quantitatively verify that the rate of label noise is significantly higher amongst high spuriosity images from classes with strong negative spurious gaps, as mentioned in Section 5.
iv. We demonstrate strong preliminary signal that an automated version of Spuriosity Rankings leveraging recent VLMs and LLMs can very efficiently extract interesting model biases (details below).
We are grateful that the reviewers see the merits of our simple yet powerful idea to sort existing data using existing models so that we can better utilize them, towards more reliable/less biased ML models. We hope our rebuttals have addressed most of the reviewer comments. Thank you.
**On human involvement**
We believe that human involvement is a strength of our framework, as it increases transparency. Namely, the human in the loop is given a concise inside look at the cues a model trained on the given data is likely to rely upon. Moreover, the human is given agency to decide which cues model performance should be invariant to. Next, we note that the level of human involvement is relatively low. Given the complexity of the task of sorting thousands of images within a class, we believe that only requiring a human to inspect a handful of images is quite efficient. With an appropriate UI, we can confirm that this takes no longer than about 30-40 seconds per class.
However, in cases where minimizing cost is preferred, Spuriosity Rankings can absolutely be automated. One way to do so is to automate the annotation of neural features as core or spurious. Specifically, one could automatically segment the class object with open-vocabulary segmentation models [1], and then compute the amount of saliency placed on the object by the neural activation map. If most of the salient pixels for a neural feature lie outside of the image region containing the class object, one could automatically flag such a feature as spurious. Indeed, a similar idea is explored with promising early signs in this recent workshop paper [2].
Another way to automate Spuriosity Rankings is to leverage vision-language models (VLMs) like CLIP [3]. With VLMs, we can compute the similarity of an image to any concept, encoded directly from text. After sorting images by similarity to spurious concepts, we can inspect the Spurious gap to measure bias, and train on low spuriosity images to mitigate bias. Further, one can utilize an LLM to automatically generate spurious concepts that a model may rely upon. To demonstrate the feasibility of this extension, we perform it to discover spurious correlations for CLIP. Specifically, we ask Vicuna-13b ‘List 16 different locations in which a {classname} may appear in an image’ and 'In an image of a {classname}, list 16 other objects that may also appear' for all classes in ImageNet, so as to extract common backgrounds and co-occurring objects (i.e. potential relevant spurious cues). We then rank images within their class by spuriosity (measured by similarity of image embedding to the text embedding of ‘a photo of a {LLM inferred potential spurious concept}’), compute spuriosity gaps, and inspect class-concept pairs with the highest gaps. Indeed, the framework detects numerous interesting biases, such as ‘clay’ for the class potter’s wheel and ‘credit cards’ for the class wallet. Moreover, inspection of the images with most and least spuriosity confirms that Spuriosity Rankings computed in this manner also can extract from one’s own data natural examples and counterfactuals of the spurious correlation (e.g. potter's wheel images with and without clay). See the attached pdf to inspect these images.
Note that the underlying mechanism of Spuriosity Rankings is preserved: we utilize the representation space of an interpretable model to quantify the presence of relevant cues, and rank images by spuriosity (proxied by similarity/activation of concept directions in representation space) to enable interpretation of spurious cues in context and measure model bias caused by them. While we believe the human in the loop of our presented framework offers some advantages, we appreciate the reviewers' questions regarding automated alternatives, are confident that they exist, and will include some discussion of them in the revised draft.
[1] Segment Anything, Kirillov et al, 2023
[2] Identifying and Disentangling Spurious Features in Pretrained Image Representations, Darbinyan et al, 2023
[3] Learning Transferable Visual Models From Natural Language Supervision, Radford et al, 2021
Pdf: /pdf/17a6971248d70d80fe2d0ddbbbf76ab2d2912292.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
When are ensembles really effective? | Accept (poster) | Summary: The authors offer a novel theoretical analysis of the majority vote classifier. In the process they define the "ensemble improvement rate" and give lower and upper bounds on it in terms of another entity they call the "disagreement-error ratio", under a novel assumption (competence) that they show empirically is mild. They also show how other works can be phrased in terms of lower/upper bounds on the ensemble improvement rate.
Strengths: As they say in the paper, without any assumptions all that can be said is that EIR (the ensemble improvement rate) is >=-1. In MLIS20 they make additional assumptions and yield better results, but the current work improves upon the work in MLIS20 by a factor of 2.
The main novelty and strength of the paper is in the definition of "competence", a seemingly mild assumption that allows one to get much better theoretical results. (Theorem 1: competent ensembles never hurt, i.e. EIR>=0; and Theorem 2 giving explicit bounds of EIR under the assumption of competence.)
Another strength of the paper is that they are not satisfied that "competence" feels like a mild assumption, they show empirical proof that this is often the case.
The mathematical definitions are elegant, and the paper is easy to read.
Weaknesses: It is not clear how much the analysis can translate to regression tasks, perhaps some words to address this would be helpful. It may also be nice to say some words about whether the bounds become trivial as K goes to infinity.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: This is a bit nit-picky, but I find the $W_{\rho}$ notation confusing: $E_{h\sim\rho}(1_{(h(X)\neq Y)})$ should really be $E_{h\sim \rho}(1_{(h(X)\neq Y)}|X, Y)$. On the other hand, others may find the notation "$E_{h\sim\rho}(1_{(h(X)\neq Y)}|X, Y)$" confusing, despite being exactly correct. (Conditional expectation on random variables yields a random variable.)
Perhaps the solution is to keep $W_{\rho}$ as $W_{\rho}(X,Y)$ (meaning not to neglect $X$ and $Y$ in the notation)? Or maybe explain a little more that $P(W_{\rho} \in [t, 1/2))$ is over the distribution of $(X,Y)$?
I am also a little confused about the relationship between competence and a similar notion where you change the order of expectation: replace $P_{(X,Y)\sim D}(E_{h\sim \rho}(1_{(h(X)\neq Y)}|X, Y) \text{is in some interval})$ with $P_{h\sim \rho}(E_{(X,Y)\sim D}(1_{(h(X)\neq Y)}|h) \text{is in some interval})$. Is this more or less intuitive? Can anything be said about ensembles with this alternate definition of competency?
How does this work compare with the seminal "Super Learner" paper (Van der Laan, M. J., Polley, E. C., and Hubbard, A. E.) that is commonly cited in theoretical ensemble research? They compare ensemble performance to the best performing base learner, but they don't give guarantees for when it is better. Would they be able to benefit from a regression-equivalent of "competence"?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 4 excellent
Limitations: None that I can see.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their positive review and very insightful questions. We completely agree on the notational points, and will improve this in an updated draft of the paper.
As we mention in our response to Reviewer 3, the analysis is actually very simple in the regression case. In this case, via a bias-variance decomposition, we obtain an _identity_ $EIR = \frac{E_f[L(f)] - L(\bar{f})}{E_f[L(f)]} = \frac{E_X[\text{Var}_{f}[f(X)]]}{E_f[L(f)]} \geq 0$. Much of the novelty of this work is in addressing the much more challenging case of classification with the 0-1 loss. We will provide some additional clarification on this point in an updated draft of the paper.
The case $K\to \infty$ is actually very interesting and highlights a significant strength of our bound versus prior work. Note that in the limit of large $K$, Theorem 2 reduces to $\text{EIR} \geq 2\,\text{DER} - 3$. Far from trivial, this is in fact the same bound one would obtain in the _binary_ case according to the result of Masegosa et al. 2020. Prior to our work, this bound was the tightest known for any $K$. So even under the pessimistic scenario $K \to \infty$, our results (and the competence assumption) constitute a significant improvement over existing second-order bounds.
With respect to the alternative form of the competence assumption the reviewer proposes, we could see a case for this new notion being slightly more intuitive. In particular, depending on one's perspective, it may be more intuitive to reason about a probability over models $h\sim \rho$ versus a probability over data. However, it's unclear whether a similar analysis could be conducted under this assumption -- certainly the techniques we employ would have to be suitably adjusted.
The "Super Learner" paper you mention proposes and studies a specific method for ensembling, whereas our paper derived results that apply to (almost) any arbitrary ensembling technique. Moreover, they focus on the regression setting, whereas our focus is on classification.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for the interesting comments. I retain my score, and recommend acceptance.
About "Super Learner", I think the authors are mistaken: stacked generalizations are not a "specific method for ensembling", but rather a vast generalization of almost all ensembling techniques used in practice. One would be hard pressed to find an ensembling method that does not fall into that category. (Many classic ensembling methods such as simple mean/median etc. are simply the case where the stacker is constant.) I'd still urge the authors to make the connection and explain the differences in the final draft.
---
Reply to Comment 1.1.1:
Comment: Thank you again for your positive feedback. It seems we may have misinterpreted the results of the “Super Learner” paper; we will do a more thorough review of this work and include a discussion in an updated draft of the paper. | Summary: The paper "When are ensembles really effective?" discusses the question asked in the title: When are ensembles really effective? To do so, the authors introduce novel theoretical measures called the ensemble improvement rate and the disagreement-error ratio. The ensemble improvement rate measures the relative rate an ensemble improves over a single learner, whereas the disagreement-error ratio measures the expected disagreement between members of the ensemble scaled by the expected performance of each model. The authors relate both terms to each other through a series of inequalities, i.e. provide an upper and lower bound for the ensemble improvement rate wrt. to the disagreement-error ratio. Moreover, the authors introduce the concept of competent ensembles which more closely resemble ensembles we are seeing in practice: While we might construct ensembles with arbitrarily bad members, we usually see ensembles that, on average, have well-performing members. Last, the authors conclude the paper with an empirical investigation that shows that a) ensembles are indeed competent in practice and b) that the disagreement-error ratio is valuable in predicting if an ensemble might improve performance compared to a single model.
Strengths: - The paper is well-written and easy to follow. The authors introduce remarks where necessary and re-visit existing results from the literature if it adds valuable information to the paper.
- The authors introduce new theoretical concepts, argue in favor of them, and show their practical value in experiments. In addition, the authors also validate any assumptions they are making in their empirical investigation. Hence, the authors offer a very complete overview of their ideas, studying them from a theoretical and a practical point of view. This should be the gold standard for all papers introducing novel concepts.
- The introduced ensemble improvement rate and disagreement-error ratio seem to work well in practice and hence are valuable. Second, I think the idea of competent ensembles can be potentially impactful for future studies about ensembles, as it brings the theory about ensembles closer to what we see in practice
Weaknesses: - In some form, the paper presents many things we already know in a new form. For example, the disagreement-error ratio can be seen as some form of diversity that has been studied for a long time in the literature. Similarly, the concept of weak learnability and competent ensembles seem to be closely related to each other (see my question below). While the authors did a decent job relating their work to existing work, a deeper discussion here would be interesting
- There is a minor typo in the paper: Section 3 uses "K" as the number of classes, whereas in Section 4 "C" is used for the number of classes
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: - In the introduction, you present the following scenario: "In particular, consider the following practical scenario: a practitioner has trained a single (perhaps large and expensive) model, and would like to know whether they can expect significant gains in performance from training additional models and ensembling them" -- How can you answer this question with EIR and DER? As far as I can see, EIR and DER require already trained ensembles and explain their performance and not predict the performance of a yet untrained ensemble? In this sense, it is similar to the bias-variance decomposition that explains the performance of a model/ensemble post-hoc (i.e. after training) and, therefore might be used as a theoretical guidance tool but not necessarily as a prediction of what model/ensemble performs better or worse in a given scenario. Did I miss something here?
- Did you explore the relationship between the disagreement-error ratio and the bias-variance decomposition, and if so, what is it? I assume that given a suitable disagreement rate D, one could come up with a term that is close to the variance (i.e., diversity), but the normalization via the expected error rate is novel here.
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 4 excellent
Limitations: The authors empirically validate their assumption of competent ensembles that might limit the applicability of the presented theory. Moreover, they point out the unique case of Random Forests that arguably justifies a study in its own right. I agree with the authors that there do not seem to be any negative societal impacts from the theory presented in this paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 9: Very Strong Accept: Technically flawless paper with groundbreaking impact on at least one area of AI/ML and excellent impact on multiple areas of AI/ML, with flawless evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your strong review of our work, and very helpful feedback.
Thank you for noticing the K/C typo -- we will correct this. We also agree that a better discussion of the relation of our work to weak learnability would be helpful; we will do our best to incorporate this in an updated draft of the paper.
For the first question, what you say is indeed true with respect to our theory: to compute any of the terms in our bounds (e.g. the DER), one needs to already have multiple trained models. However, one concrete piece of guidance we can give would be based on our empirical results regarding the behavior of the DER in and out of the interpolating regime. In particular, a simple rule-of-thumb would be that if a *single* trained model is _not_ able to easily interpolate the training data, then ensembling is likely to help significantly (though what "significantly" means will of course depend on the particular application). In the future, it would be interesting to see if the $DER </> 1$ condition could be used to obtain different heuristics in other settings.
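As a concrete illustration, DER is straightforward to compute once held-out predictions from an ensemble are available. The sketch below uses synthetic predictions and averages disagreement over distinct model pairs (one of several possible conventions); it is illustrative only, not the paper's experimental code:

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic held-out predictions for a small ensemble: each "model" predicts the
# true label with probability 0.8, otherwise a uniformly random class.
n_models, n_pts, n_classes = 5, 400, 3
labels = rng.integers(n_classes, size=n_pts)
noise = rng.integers(n_classes, size=(n_models, n_pts))
preds = np.where(rng.random((n_models, n_pts)) < 0.8, labels, noise)

avg_err = np.mean(preds != labels)  # expected error of a single model
# Expected pairwise disagreement, averaged over distinct model pairs.
pairs = [(i, j) for i in range(n_models) for j in range(i + 1, n_models)]
disagreement = np.mean([np.mean(preds[i] != preds[j]) for i, j in pairs])

der = disagreement / avg_err  # disagreement-error ratio
print(f"average error = {avg_err:.3f}, DER = {der:.2f}")
```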
As for the second question, our results are indeed very closely related to a bias-variance decomposition. In the regression case, when one computes a bias-variance decomposition over randomness in the sampling of models, a completely analogous result holds as an identity:
$E_f[L(f)] = L(\bar{f}) + E_X[\text{Var}_{f}[f(X)]]$, implying that
$$
EIR = \frac{E_f[L(f)] - L(\bar{f})}{E_f[L(f)]} = \frac{E_X[\text{Var}_{f}[f(X)]]}{E_f[L(f)]}
$$
Thus, without the normalization, the bound in Theorem 2 admits a similar interpretation. Similar expressions can also be derived for other loss functions, e.g. as in Ortega et al. 2022. We agree that this is an important comment, and will use additional space in an updated draft to include a discussion on this point.
---
Rebuttal Comment 1.1:
Comment: Thank you for the clarifications, which solidified my original scoring :-) | Summary: The paper asks the following question: when is ensembling beneficial? The benefit is measured by the ensemble improvement rate (EIR), defined as the difference of average and majority voting risk, divided by the average risk. It is shown that 'competent ensembles never hurt', in the sense that under mild assumptions, EIR \geq 0 (Th. 1). A more precise result (Th. 2) provides quantitative bounds relating EIR and another quantity called the disagreement-error ratio (DER). DER is simply the expected disagreement between our ensembles, divided by the expected loss. A consequence of this result is that improvement is moderate when disagreement is small, whenever it can be large whenever disagreement is large. The paper concludes with some empirical evaluation.
Strengths: The key question of the paper is of utmost importance to the machine learning community. The answers provided by the present work, although simple, are quite interesting. What is really promising is that competence can be evaluated in practice (Section 4.2), giving an actionable way to check whether it is profitable to do ensembling in a specific concrete case.
Weaknesses: The main weakness in my opinion is the restriction to the classification setting, and, further, to the use of the 0-1 loss.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Do you suspect that Th. 2 is tight? Could examples similar to those developed for Th. 1 achieve tightness in the string of inequalities?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: see weaknesses
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their positive review and helpful feedback. The classification/0-1 loss setting represents the most significant challenge from the theoretical perspective, hence the focus on this case. For example, in regression with MSE loss, one has (via a simple bias-variance decomposition) the following identity:
$$
E_f[L(f)] = L(\bar{f}) + E_X[\text{Var}_{f}[f(X)]]
$$
From this, an analogous characterization of the EIR can be derived: $EIR = \frac{E_f[L(f)] - L(\bar{f})}{E_f[L(f)]} = \frac{E_X[\text{Var}_{f}[f(X)]]}{E_f[L(f)]} \geq 0$. Similarly, for other loss functions (like the cross entropy loss), the analysis is significantly easier, as one can typically exploit Jensen's inequality to relate the average error to the error of the ensemble model. Other works have addressed this case in detail, e.g. Abe et al. 2022 and Ortega et al. 2022. We will provide some additional clarification on this point in an updated draft of the paper.
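The regression identity above is easy to check numerically; below is an illustrative Monte Carlo verification with synthetic predictions (no trained models involved, all numbers arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic regression setting: each "model" is the target plus independent noise.
n_models, n_points = 50, 1000
y = rng.normal(size=n_points)
preds = y + rng.normal(size=(n_models, n_points))

avg_loss = np.mean((preds - y) ** 2)               # E_f[L(f)]
ens_loss = np.mean((preds.mean(axis=0) - y) ** 2)  # L(f_bar), the average model
variance = preds.var(axis=0).mean()                # E_X[Var_f[f(X)]]

# The bias-variance identity E_f[L(f)] = L(f_bar) + E_X[Var_f[f(X)]] holds
# exactly (up to floating point), so EIR = variance / avg_loss >= 0.
eir = (avg_loss - ens_loss) / avg_loss
print(f"EIR = {eir:.3f}")
```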
Without further restrictions, we believe Theorem 2 should be tight, although we do not have a formal argument for this. This belief comes from numerical simulations involving perturbations of examples along similar lines to those given in Appendix D. Consider the case where, for each example $x$, exactly a $(1-\epsilon)$ fraction of classifiers predicts the correct label and the remaining $\epsilon$ fraction of classifiers predicts a wrong label. In this case, $EIR=1$, and the lower bound of Theorem 2 is $1-\epsilon\frac{4(K-1)}{K}$, which can be arbitrarily close to $1$. More precise conditions on the nature of the constants in the lower bound of Theorem 2 arise from considering classifiers with slightly differing individual probabilities.
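The construction above can be simulated directly. The following toy check (class count $K$ and ensemble size chosen arbitrarily for illustration) evaluates the rebuttal's bound expression $1-\epsilon\frac{4(K-1)}{K}$ alongside the realized EIR:

```python
import numpy as np

rng = np.random.default_rng(2)

# On every example, exactly a (1 - eps) fraction of classifiers predicts the
# correct label; the remaining eps fraction predicts a single wrong label.
K, eps, n_cls, n_pts = 4, 0.1, 100, 500
n_correct = round((1 - eps) * n_cls)

labels = rng.integers(K, size=n_pts)
preds = np.empty((n_cls, n_pts), dtype=int)
for j in range(n_pts):
    wrong = (labels[j] + 1) % K
    col = np.array([labels[j]] * n_correct + [wrong] * (n_cls - n_correct))
    preds[:, j] = rng.permutation(col)

avg_err = np.mean(preds != labels)  # equals eps by construction
# Majority vote is correct on every point, since 1 - eps > 1/2.
mv = np.array([np.bincount(preds[:, j], minlength=K).argmax() for j in range(n_pts)])
mv_err = np.mean(mv != labels)

eir = (avg_err - mv_err) / avg_err
lower_bound = 1 - eps * 4 * (K - 1) / K  # bound expression quoted in the rebuttal
print(f"EIR = {eir:.2f}, lower bound = {lower_bound:.2f}")
```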
---
Rebuttal Comment 1.1:
Title: Post-rebuttal
Comment: Thank you for the added clarification regarding the loss functions and the comment on the tightness of Th. 2. | Summary: Analysis the ensemble improvement rate vs. the disagreement-error ratio
Strengths: A solid analytical analysis.
Weaknesses: It is not clear to me what the usefulness of the obtained results is.
The scope of the experiments is limited.
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: What is the usefulness of this study?
How to apply them in practice?
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 3 good
Contribution: 2 fair
Limitations: Not applicable
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for providing a review of our paper. As the title and results suggest, the usefulness of this study is in describing when we can expect ensembling methods to be effective. In particular, our empirical results suggest that ensemble improvement is much less pronounced for interpolating versus more traditional non-interpolating ensembles. If possible, it would be helpful if you could provide some additional feedback and/or detail regarding your concerns about our work.
Strengths: - The work addresses a relevant and, still, open problem: when are ensembles really effective? The presented theoretical framework is specifically designed to answer this question. Although it builds on previous theoretical analysis, it introduces a new concept, “competent ensembles”, which gives rise to more insightful conclusions.
- The empirical evaluation of this work is quite strong. Besides validating the proposed theoretical framework, authors also provide insightful analysis about the effectiveness of ensembles. The analysis of what happens at the interpolation regime is of high importance for current practices of machine learning.
Weaknesses: - Assumption 1 is central in the theoretical analysis of this work. Although authors make an effort in describing assumption 1 as a condition guaranteeing that ensembles are better than individual classifiers, this assumption is very technical and it is not well described.
- Although this is not the main aim of this paper, the theoretical framework provides little to no guidance on how to build better ensembles.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: Could you derive Thm 2 under the assumption that EIR>1? Because, in this case, your assumption should be EIR>1.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 3 good
Limitations: The authors do not fully discuss the limitations of their work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their positive review of our work, and useful feedback. While the aim of this work is not necessarily to provide guidance on how to build better ensembles, we hypothesize that some of our insights might lead to the development of better methods in future work. In particular, our findings regarding EIR/DER in and out of the interpolating regime suggest that overparameterized deep ensembles require more diversity than current methods allow to be effective.
Regarding Assumption 1, there are a few comments that can be made to perhaps better elucidate its nature. For example, taking $t = 0$ in the statement of the assumption reveals the condition that, on average over the data, more classifiers in the ensemble are correct than not. What we actually require is stronger than this -- that, over the data, it is more likely that *slightly* more classifiers in the ensemble are correct than *slightly* less. This is along the lines of a stochastic dominance condition. However, we do observe that the competence condition holds widely in practice. We will add these comments in the revised version of the paper.
With regard to the reviewer's question about $\text{EIR} > 1$, some clarification would be helpful. By definition, $\text{EIR} \leq 1$, so perhaps the reviewer had in mind a different scenario?
---
Rebuttal Comment 1.1:
Comment: Dear authors,
In relation to my question. Sorry, it was a typo. My question is: Could you derive Thm 2 under the assumption that EIR>0? Because, in this case, your assumption should be EIR>0. I would like to see your response.
---
Reply to Comment 1.1.1:
Comment: Thank you for clarifying, this is a good question. It turns out that assuming $EIR > 0$ is _not_ sufficient to prove Theorem 2 (that is, competence is not necessary for $EIR \geq 0$). To see this, we can provide a numerical example for when $EIR > 0$ but the ensemble is not competent, and the lower bound in Theorem 2 does not hold. We summarize the setup of this example in the following table:
| Set $A \subset \mathcal{X}\times \mathcal{Y}$, $P_{X,Y}(A) = 0.6$ | Set $B\subset \mathcal{X}\times \mathcal{Y}$, $P_{X,Y}(B)=0.4$ |
|------------------------|---------------------|
| For all $(x,y)\in A$, $P_h(h(x) = y) = 0.7$ | For all $(x,y)\in B$, $P_h(h(x)=y) = 0.4$ |
where we assume the binary $K=2$ case. For this setup, one can compute $L(h_{MV}) = 0.4$ (note $h_{MV}(x) \neq y \Leftrightarrow (x,y)\in B$), $E[L(h)] = 0.42$, and $E[D(h,h')] = 0.444$. In this case, $EIR = 0.02/0.42 > 0$, but the lower bound is $DER - 1 = 0.444/0.42 - 1 = 0.024/0.42 > 0.02/0.42 = EIR$, and so the lower bound in Theorem 2 does not hold. | null | null | null | null | null | null |
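For readers who wish to double-check the arithmetic in the counterexample above, the quantities can be reproduced in a few lines (a verification sketch only; the variable names are ours, and the per-point disagreement uses $2p(1-p)$ for two independent draws of $h$, consistent with the stated value of $0.444$):

```python
# Verify the counterexample: EIR > 0, yet the Theorem 2 lower bound fails.
# Regions A, B carry probability mass 0.6 / 0.4; a randomly drawn classifier
# is correct with probability 0.7 on A and 0.4 on B (binary, K = 2).
p_A, p_B = 0.6, 0.4
acc_A, acc_B = 0.7, 0.4

# Majority vote is wrong exactly on B, where per-classifier accuracy < 1/2.
L_mv = p_B                                         # L(h_MV) = 0.4
# Average individual error rate E[L(h)].
E_L = p_A * (1 - acc_A) + p_B * (1 - acc_B)        # 0.42
# Expected disagreement E[D(h, h')] of two independent draws: 2p(1-p) per point.
E_D = p_A * 2 * acc_A * (1 - acc_A) + p_B * 2 * acc_B * (1 - acc_B)  # 0.444

EIR = (E_L - L_mv) / E_L     # 0.02 / 0.42 > 0
bound = E_D / E_L - 1        # DER - 1 = 0.024 / 0.42
print(EIR > 0, bound > EIR)  # True, True -> the lower bound EIR >= DER - 1 fails
```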
Learning Better with Less: Effective Augmentation for Sample-Efficient Visual Reinforcement Learning | Accept (poster) | Summary: This paper conducts a detailed study of which attributes of data augmentation in Visual Reinforcement Learning play essential roles. With extensive experiments on DMC tasks, four main findings are given regarding strength and diversity. Based on these findings, the authors introduce two new data augmentation techniques, `Random PadResize` and `Cycling Augmentation`, and achieve a reasonable improvement over DrQ-v2.
Strengths: 1. The attributes of data augmentation are well categorized and systematically studied, supported by extensive experiments.
2. The newly proposed data augmentation mechanism is well motivated and simple enough for the Visual RL community to widely adopt.
Weaknesses: 1. **Lack of novelty.** The given properties of data augmentation are not too surprising, and neither is the proposed mechanism.
2. **Limited performance.** As shown in Figure 8 and Figure 9, the sample efficiency is very close to that of DrQ-v2 (almost the same). However, the main claim made by the authors is improved sample efficiency. Moreover, judging from the curve trends in Figure 8 and Figure 9, a bit more training would possibly lead to the same performance for all the compared methods.
3. **Lack of enough training.** The training steps across all environments are not enough for the curves to converge, so it is unclear whether the newly proposed mechanism leads to better convergence or merely gives a mild improvement within a limited number of steps.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: See `weaknesses` for questions.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: The limitations have been mentioned by authors in the conclusion section.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate your thorough review and insightful feedback. We will address each of your comments and concerns below and also in our revised manuscript.
----
**W1**: *Lack of novelty.*
**A**: We appreciate you raising the concern regarding novelty. While we understand your perspective that the proposed augmentation properties and mechanism may not seem overly surprising, we believe this work makes several novel contributions to the field of visual RL:
1. Our in-depth investigation helps fill a gap in the analysis of DA attributes for visual RL. We provide quantitative experiments on how hardness, strength diversity, spatial diversity, and type diversity impact the effectiveness of DA in sample-efficient visual RL. The rethinking part of this paper provides insights to guide the design of RL-tailored DA techniques.
2. Rand PR, an individual augmentation incorporating controlled hardness and enhanced spatial diversity, is an original contribution that improves sample efficiency in visual RL.
3. CycAug, a multi-type fusion scheme tailored for RL via periodic cycling of diverse augmentations, is also novel: it exploits type diversity while maintaining data-distribution consistency.
4. We achieve state-of-the-art sample efficiency on both DM Control and CARLA benchmarks, validating the efficacy of our proposed techniques.
---
**W2:** *Limited performance.*
**A**: In this paper, our objective is to leverage the potential of data augmentation (DA) to enhance the sample efficiency of visual reinforcement learning (RL), i.e., achieving the highest possible performance within a limited number of interactions with the environment. As demonstrated in our paper through evaluations conducted in DMC and CARLA environments, as well as the supplementary experiments in the table-top manipulation environments of Robosuite (as shown in `Figure 1 of the Response PDF`), CycAug consistently achieves significantly higher sample efficiency than DrQ-V2 after a constrained number of interaction steps. Furthermore, when the allowed interaction steps are increased, CycAug exhibits faster convergence and higher final performance. For example, in CARLA tasks, CycAug outperforms DrQ-V2 by substantial margins: $23.8\%$ in final performance and $43.7\%$ in the low-data regime.
A salient feature of the proposed Rand PR and CycAug techniques is their versatility as plug-and-play modules that can boost the effectiveness of existing algorithms. Our methods solely involve enhancing the data augmentation while keeping the base visual RL framework unchanged. As such, Rand PR and CycAug are broadly applicable modules rather than complete stand-alone systems. The reduced hardness and controlled diversity of Rand PR, along with the stability benefits of cyclic augmentation in CycAug, can serve as universal plugins that augment prevailing RL methods. We empirically demonstrate this versatility by improving the state-of-the-art DrQ-v2. Furthermore, Rand PR and CycAug are compatible with, and can potentially enhance, myriad other cutting-edge visual RL algorithms. This underlines their value as versatile, broadly applicable techniques.
---
**W3:** *Lack of enough training.*
**A**: Thanks for your suggestion. To ensure convergence of algorithm performance, we increased the allowed training steps to twice their original values on four representative DMC tasks. We report the average episode return after $3\times 10^6$ frames in the table below. The complete training curves are depicted in `Figure 3 (Right) of the Response PDF`. CycAug achieves faster convergence than DrQ-V2 across all tasks and demonstrates higher final performance on Quadruped Run and Walker Run.
| Task | DrQ-V2 | CycAug |
|----------------|----------------|----------------|
| Quadruped Run | $694.7\pm 134$ | $840.6\pm 57$ |
| Walker Run | $699.4\pm 44$ | $772.6\pm 17$ |
| Quadruped Walk | $920.3\pm 13$ | $937.1\pm 13$ |
| Hopper Hop | $358.7\pm 47$ | $366.5\pm 65$ |
---
Rebuttal Comment 1.1:
Title: Reply from Reviewer
Comment: Thank the authors for the new experiment results.
The curves for the new tasks and extended training steps further show that the performance improvement of the proposed algorithm is limited across domains and tasks (**Q2**). It is still unclear whether such improvement is universal across domains and tasks, though I appreciate the efforts made by the authors to conduct experiments on Robosuite.
I will maintain my score.
---
Reply to Comment 1.1.1:
Comment: We sincerely appreciate the reviewer's continued discussion and feedback. In the interest of brevity, we attempt to address your remaining concerns as succinctly as possible.
**More extensive experiments demonstrate consistent performance improvements of CycAug across domains and tasks.** Beyond the experimental results presented in the manuscript, we further validated the efficacy of CycAug on a variety of challenging environments and tasks. The consistent experimental results demonstrate that CycAug improves upon the performance of DrQ-V2 without incurring any additional cost.
1. **Robosuite**
The training curves of Robosuite presented in `Response PDF` are aggregated from two challenging manipulation tasks of Lift and TwoArmPegInHole. We report in the table below the episode returns attained after 1M frames across 5 random seeds.
| Task of Robosuite | DrQ-V2 | CycAug |
|-------------------|----------------|--------------------|
| Lift | $253.9\pm 125$ | **$324.5\pm 120$** |
| TwoArmPegInHole | $295.2\pm 56$ | **$350.4\pm 53$** |
2. **Humanoid Tasks of DMC**
We further evaluated the performance of CycAug on hard DMC tasks after 15M frames. Due to constraints on time and computational resources, we have currently only completed training over 3 random seeds, and the scores of DrQ-V2 are copied from the paper[1].
| Hard Task of DMC | DrQ-V2 | CycAug |
|------------------|---------------|--------------------|
| Humanoid Stand | $167\pm 159$ | **$376.8\pm 169$** |
| Humanoid Walk | $243\pm 162$ | **$402.6\pm 153$** |
3. **Habitat**
Habitat presents a challenging indoor visual navigation task. In the table below, we present a comparison of success rates between CycAug and DrQ-V2 after different training frames, across 5 seeds.
| Habitat | DrQ-V2 | CycAug |
|---------|----------------|-------------------|
| Success Rate @ 200k Frames | $0.37\pm 0.11$ | **$0.48\pm 0.13$** |
| Success Rate @ 300k Frames | $0.78\pm 0.09$ | **$0.85\pm 0.08$** |
**As a multi-type DA fusion scheme, CycAug increases the type diversity of training data while barely elevating data hardness, thus its stable performance gains are foreseeable, as extensively evidenced in our experiments.**
[1] Cetin E, Ball P J, Roberts S, et al. Stabilizing off-policy deep reinforcement learning from pixels. ICML, 2022. | Summary: This paper explores the crucial attributes of domain adaptation (DA) in achieving sample-efficient visual reinforcement learning (RL) and emphasizes the specific requirements of DA for visual RL. Extensive experiments are conducted to investigate these attributes.
The paper introduces two practical guidelines that aim to maximize the potential of DA. The first guideline focuses on individual DA operations, while the second guideline explores multi-type DA fusion schemes. Based on these guidelines, the authors propose two improvement strategies, namely Rand PR and CycAug.
CycAug, which incorporates Rand PR as a key component, is shown to outperform existing methods in terms of sample efficiency. Through comprehensive benchmark tasks on DM Control and CARLA, CycAug demonstrates state-of-the-art performance in enhancing sample efficiency in visual RL.
Strengths: 1. The hardness of DA in RL has been well analyzed.
2. Rand Padresize: This paper introduces Rand Padresize as a novel augmentation method in reinforcement learning (RL). Unlike cropping, Rand Padresize retains all the information, which is highly advantageous. This unique approach addresses the challenge of hardness from augmentation in RL, making it a notable strength of the paper.
3. CycAug: The paper proposes CycAug, which employs a cycling method to overcome the potential disruption caused by excessively frequent variations. This demonstrates the authors' thoughtful consideration of the impact of augmentation strategies on the learning process. The inclusion of CycAug as a solution to mitigate disturbances is another strength of the paper.
Weaknesses: 1. Lack of novelty: The paper acknowledges the problem of hardness in RL as a well-known issue.
2. Only two augmentations are used in CycAug.
Technical Quality: 4 excellent
Clarity: 3 good
Questions for Authors: What is the difference between (i), (iii), and (iv) below, from page 2?
(i) The training performance of visual RL is highly sensitive to the increase of DA’s hardness
(iii) Unlimited increases in strength diversity can harm visual RL performance.
(iv) Despite the increased type diversity, naively applying multi-type DA to visual RL training can lead to decreased performance.
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 4 excellent
Presentation: 3 good
Contribution: 3 good
Limitations: - The problem of hardness from DA in RL has already been addressed, so there is no novelty in tackling it again.
- It is necessary to also discuss the case of using more augmentation methods in CycAug.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate your thorough review and insightful feedback. We will address each of your comments and concerns below and also in our revised manuscript.
----
**Q1**: *"What is different in below (i), (iii), (iv) on the page 2. [...]"*
**A**: These three key findings correspond to the three sets of comparative experiments we conducted in Section 3.2 of the paper. While interconnected, each of the three findings offers distinct insights into constructing efficacious data augmentation for sample-efficient visual RL:
(i) Compared to other domains like supervised learning, the training process of visual reinforcement learning is more sensitive to increases in the hardness of data augmentation (DA). Through quantitative experiments, we find that even minor increases in hardness significantly impair training performance. This underscores the unique requirements of visual RL for DA design.
(iii) While unlimited increases in strength diversity generally improve robustness in supervised learning and adversarial training, our experiments reveal that this can actually harm performance in visual RL. This is predominantly attributable to the heightened hardness-sensitivity of reinforcement learning, and it likewise calls for careful tuning of data augmentation details to meet the distinct demands of RL.
(iv) Although elevating type diversity is widely deemed effective for enhancing DA, we discover directly applying multi-type DA fusion schemes from other fields fails to improve sample efficiency in visual RL. This abnormal failure can be partly ascribed to the sensitivity of visual RL to the hardness of DA, but is also largely due to training instability from complex transformations or dynamic fluctuations in the data distribution that result from frequent switching between different types of DA operations. We demonstrate in `Figure 2 of the Response PDF` that the original fusion schemes introduce high variance into the Q value estimates during training.
---
**Limitation 1:** *"The problem of hardness from DA in RL has already been addressed, so there is no novelty in tackling it again."*
**A**: To the best of our knowledge, there are mainly two studies that focus on the problem of hardness from DA in visual RL: SVEA [1] and SECANT [2]. However, these two methods are both proposed to enhance training stability when applying **strong data augmentation**, which is imperative for improving generalization in visual RL but heightens the difficulty of training. SVEA works by (1) using only the weak augmentation PadCrop (called random shift in its original paper) to compute Q-Target values, and (2) mixing observations augmented by strong augmentations with those augmented only by weak augmentations. SECANT opts to only use weak augmentation during RL optimization, and then distills the obtained policy into a student agent using a teacher-student framework, introducing strong augmentations with high hardness during the distillation process.
The core idea of these two studies is to leverage weak augmentation such as PadCrop (referred to as random shift in their original papers) to alleviate the extra data hardness and instability introduced by strong augmentation. However, the investigation conducted in `Section 3.2` of our paper demonstrates that even for weak augmentations, changes in their hardness level still significantly impact sample efficiency. Therefore, the problem of hardness from weak DA in visual RL remains an unresolved issue warranting further effort.
In this paper, we provide fine-grained analyses of the impact of hardness on training and find that even minor increases in hardness can have a significant negative impact on training performance. Based on these insights, we designed an augmentation method called Rand PR that has lower hardness compared to PadCrop, the most prevalent augmentation used in current visual RL methods. Our comprehensive investigation into the impact of DA hardness on training sample efficiency elucidates the unique requirements of visual RL for DA and provides actionable guidelines for design. These contributions are substantial and non-trivial.
Additionally, we innovatively conducted a systematic study on the impacts of three different diversity aspects (strength, spatial, and type) on sample efficiency in visual RL. The obtained conclusions provide actionable guidelines for designing DA operations suitable for visual RL scenarios.
[1] Hansen N, Su H, Wang X. Stabilizing deep q-learning with convnets and vision transformers under data augmentation. NeurIPS 2021.
[2] Fan L, Wang G, Huang D A, et al. Secant: Self-expert cloning for zero-shot generalization of visual policies. ICML 2021.
---
**Limitation 2:** *"It is necessary to also discuss the case of using more augmentation methods in CycAug."*
**A**: CycAug is a multi-type DA fusion scheme that can handle more DA types, as long as these DA operations themselves have sufficient individual effectiveness. The reason we chose Rand PR and PadCrop as components for CycAug in the original paper is that these two augmentation methods demonstrate markedly superior effectiveness compared to other augmentations. However, CycAug can be further expanded by incorporating more DA operations beyond just Rand PR and PadCrop.
We conducted further experiments on the Quadruped Run task to demonstrate the effects of CycAug with three and four components, as shown in the table below. Training curves are illustrated in `Figure 3 (Left) of the Response PDF`. Three-component CycAug utilizing PC, PR, and CS attained superior sample efficiency compared to the dual-component CycAug (PC+PR) presented in our paper, demonstrating that CycAug can be further expanded.
|Aug Component|Return|
|-|-|
|PC+PR | $728.6 \pm 64$ |
|CropShift (CS)|$536.5\pm 89$|
|Translate (Tr)|$467.4\pm 9$|
|PC+PR+CS|$783.9\pm 46$|
|PC+PR+Tr|$677.0\pm 16$|
|PC+PR+CS+Tr|$736.2\pm 62$|
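As a side note for readers, the periodic cycling at the heart of CycAug is simple to implement. The sketch below is purely illustrative (the switching period and the placeholder augmentation callables are ours, not the paper's actual code): one DA type is active at a time, and the active type advances every `period` calls.

```python
import itertools

def make_cycaug(augmentations, period=1):
    """Apply augmentations cyclically: one DA type is active at a time,
    switching to the next type every `period` calls. This keeps the data
    distribution locally consistent while still covering every type."""
    cycle = itertools.cycle(augmentations)
    state = {"aug": next(cycle), "calls": 0}

    def apply(obs):
        if state["calls"] and state["calls"] % period == 0:
            state["aug"] = next(cycle)  # move to the next DA type
        state["calls"] += 1
        return state["aug"](obs)

    return apply

# Placeholder augmentations that just tag their input, for illustration.
pad_crop = lambda x: ("PadCrop", x)
rand_pr = lambda x: ("RandPR", x)

aug = make_cycaug([pad_crop, rand_pr], period=2)
labels = [aug(i)[0] for i in range(6)]
print(labels)  # ['PadCrop', 'PadCrop', 'RandPR', 'RandPR', 'PadCrop', 'PadCrop']
```

In practice the callables would be real image transforms applied to sampled observation batches; the closure merely fixes which transform is active in the current cycle phase.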
---
Rebuttal 2:
Title: Can you please check the rebuttal comments?
Comment: Dear reviewer,
The authors have provided a response to your comments. Can you please take a look, comment accordingly, and update your review?
Thanks,
-Area Chair | Summary: This paper explores the fundamental aspects of data augmentation (DA) in the context of visual reinforcement learning (RL) and introduces two methods, Random PadResize (Rand PR) and Cycling Augmentation (CycAug), to enhance its efficacy. Extensive experiments on the DeepMind Control suite and CARLA are conducted to showcase the superior performance achieved by the proposed methods.
Strengths: 1. This paper is well-motivated. The paper tackles a pressing and significant issue of sample efficiency in visual reinforcement learning (RL), which poses a crucial obstacle for the real-world implementation of RL agents across diverse domains.
2. The paper presents a comprehensive and meticulous analysis of how the attributes of data augmentation (DA), including hardness, diversity, and fusion schemes, affect the sample efficiency of visual reinforcement learning (RL). Moreover, the paper highlights the unique requirements of DA for visual RL, distinguishing it from other domains such as supervised learning or adversarial training.
3. The writing is clear and easy to follow.
4. The paper showcases the effectiveness of the proposed methods on two demanding benchmarks for visual reinforcement learning (RL): the DeepMind Control suite and CARLA. These benchmarks provide challenging environments that allow for a thorough evaluation of the proposed methods.
Weaknesses: 1. This paper overlooks the comparison of the proposed methods with other existing data augmentation (DA) techniques explicitly designed for visual reinforcement learning (RL), such as Spectrum Random Masking [11] or PlayVirtual [12]. It would be intriguing to observe and evaluate the performance of the proposed methods in relation to these techniques in terms of sample efficiency, generalization, or diversity. Incorporating such a comparison would provide a more comprehensive understanding of the strengths and limitations of the proposed methods within the context of existing approaches for visual RL.
2. The paper lacks ablation studies or qualitative analysis that elucidate the individual contributions of each component or attribute of the proposed methods to their effectiveness. For instance, it would be valuable to compare Rand PR with PadCrop or Translate, examining factors such as hardness or spatial diversity. Additionally, evaluating CycAug against other fusion schemes in terms of type diversity or data stability would provide further insights.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Please refer to the weakness.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 4 excellent
Limitations: Please refer to the weakness.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate your thorough review and insightful feedback. We will address each of your comments and concerns below and also in our revised manuscript.
----
**W1**: *"[...] observe and evaluate the performance of the proposed methods in relation to these techniques (such as SRM and PlayVirtual) in terms of sample efficiency, generalization, or diversity. [...]"*
**A**: In visual RL tasks, using data augmentation is typically motivated by two goals: improving sample efficiency and enhancing generalization ability[1]. In this paper, we focus on investigating "which attributes enable effective DA for achieving sample-efficient visual RL?" and devise ways to harness the potential of DA in this regard. In addition to improving sample efficiency, there are also several works that aim to use DA to enhance the generalization ability of agents, such as SODA[2], SVEA[3] and SRM. These studies introduce stronger augmentations such as Overlay to improve generalization, which have been shown to be **inevitably detrimental to sample efficiency during training**[4]. The core contribution of SODA [2] and SVEA [3] lies in how to leverage weak augmentation to alleviate the negative impact of strong augmentation on the visual RL training process. Since the goal of this paper is to maximize the potential of DA for improving sample efficiency, we do not consider these strong augmentations that undermine sample efficiency as comparable targets. Note that our investigation and proposed methods are orthogonal to those DA methods aiming to improve the generalization ability of visual RL, and thus can be combined with them for further improvements.
Additionally, apart from methods like DrQ-V2 that only apply augmentation on the input without modifying other parts of the algorithm, there are also many works that combine DA with other self-supervised auxiliary tasks, such as PlayVirtual and SPR[5]. However, the latest work[6] demonstrates that adding explicit self-supervised learning tasks does not achieve higher sample efficiency compared to only applying DA as an implicit regularization. This further motivates our approach to focus on understanding and improving DA itself. This is also why we chose DrQ-V2 as our baseline and conduct in-depth research on DA based on it.
[1] Ma G, Wang Z, Yuan Z, et al. A comprehensive survey of data augmentation in visual reinforcement learning. arXiv preprint arXiv:2210.04561, 2022.
[2] Hansen N, Wang X. Generalization in reinforcement learning by soft data augmentation. ICRA 2021.
[3] Hansen N, Su H, Wang X. Stabilizing deep q-learning with convnets and vision transformers under data augmentation. NeurIPS 2021.
[4] Yuan Z, Yang S, Hua P, et al. RL-ViGen: A Reinforcement Learning Benchmark for Visual Generalization. arXiv:2307.10224, 2023.
[5] Schwarzer M, Anand A, Goel R, et al. Data-efficient reinforcement learning with self-predictive representations. ICLR 2021.
[6] Li X, Shang J, Das S, et al. Does self-supervised learning really improve reinforcement learning from pixels? NeurIPS 2022.
---
**W2-1:** *"[...] it would be valuable to compare Rand PR with PadCrop or Translate, examining factors such as hardness or spatial diversity."*
**A**: Rand PR demonstrates superior augmentation effects for sample-efficient visual RL compared to PadCrop on the vast majority of tasks in DMC and CARLA in our paper, and PadCrop has already been proven to be a better DA type than Translate, Rotate, etc. [7]. Next, we illustrate Rand PR's advantages from the perspectives of hardness and spatial diversity.
1. **Hardness:** Intuitively, by avoiding the inevitable edge information loss caused by other augmentation methods, Rand PR should achieve a lower level of hardness. To validate this, we experimentally measured the hardness of Rand PR and other DA methods including PadCrop on the CartPole Balance task. We pre-trained a clean policy on the unaugmented CartPole Balance task, then tested the performance of this policy under different augmentations and calculated the Hardness according to its definition. The results in the table below confirm our intuition: Rand PR induces markedly lower hardness compared to PadCrop and other DA types. Therefore, the intrinsically lower hardness of Rand PR contributes to its superior sample efficiency over other augmentations.
|Aug Type|Rand PR|PadCrop|Translate X|Translate Y|Rotate|
|:-:|:-:|:-:|:-:|:-:|:-:|
|Hardness|$1.33 \pm 0.44$|$1.73 \pm 0.55$|$2.17 \pm 0.64$|$2.29 \pm 0.80$|$2.65 \pm 0.67$|
2. **Spatial Diversity:** Rand PR offers sufficient spatial diversity through its large degrees of freedom in scaling ratio and content location. Compared to PadCrop and Translate, Rand PR introduces not only positional transformations but also varying scaling ratios. How to quantitatively compare the spatial diversity of different augmentation methods is an open question worth further investigation in the future.
[7] Kostrikov I, Yarats D, Fergus R. Image Augmentation is All You Need: Regularizing Deep Reinforcement Learning from Pixels. ICLR 2021.
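For intuition, a pad-then-resize operation of this kind might be sketched as follows. This is an illustrative guess at the general idea, not the authors' implementation; the padding range, the `edge` padding mode, and the nearest-neighbour resize are our placeholder choices. The key property it demonstrates is that, unlike cropping, no pixel content is discarded, while the random per-side padding varies both the scale and the location of the content.

```python
import numpy as np

def rand_pad_resize(img, max_pad=8, rng=None):
    """Illustrative pad-then-resize: pad each border by a random amount,
    then resize back to the original resolution. All original pixels are
    retained (just rescaled), and the random per-side padding shifts and
    shrinks the content."""
    if rng is None:
        rng = np.random.default_rng()
    h, w = img.shape[:2]
    top, bottom, left, right = rng.integers(0, max_pad + 1, size=4)
    padded = np.pad(img, ((top, bottom), (left, right), (0, 0)), mode="edge")
    ph, pw = padded.shape[:2]
    # Nearest-neighbour resize back to (h, w), keeping the sketch numpy-only.
    rows = (np.arange(h) * ph / h).astype(int)
    cols = (np.arange(w) * pw / w).astype(int)
    return padded[rows][:, cols]

obs = np.zeros((84, 84, 3), dtype=np.uint8)  # typical DMC observation size
aug = rand_pad_resize(obs, rng=np.random.default_rng(0))
print(aug.shape, aug.dtype)  # (84, 84, 3) uint8
```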
---
**W2-2:** *"[...] evaluating CycAug against other fusion schemes in terms of type diversity or data stability would provide further insights."*
**A**: The type diversity of multi-type DA fusion schemes depends on the components being fused, thus it is infeasible to directly compare the type diversity across different fusion schemes. However, data stability when using the same DA components is a good way to characterize different fusion schemes' properties. To compare data stability, we evaluated the variance of Q values and Q-Target values during training when applying CycAug and other fusion methods on the Quadruped Run task. As shown in `Figure 2 of the Response PDF`, CycAug consistently induces lower Q value variance, empirically demonstrating its superior stability. A more stable training process contributes to CycAug's better sample efficiency.
---
Rebuttal 2:
Title: Can you please check the rebuttal comments?
Comment: Dear reviewer,
The authors have provided a response to your comments. Can you please take a look, comment accordingly, and update your review?
Thanks,
-Area Chair | Summary: This paper explores the conditions for achieving sample-efficient visual RL with data augmentation. Based on the findings, two guidelines are proposed: one emphasizes sufficient spatial diversity with minimal hardness, leading to the introduction of Rand PadResize. Additionally, the data-sensitive nature of RL training is considered when designing multi-type data augmentation fusion schemes in visual RL. Drawing inspiration from this guideline, a RL-tailored multi-type data augmentation fusion scheme CycAug is proposed.
Strengths: The paper exhibits strong writing quality and clear communication of ideas. The content is well-structured and effectively presents the research findings. The authors' explanations and descriptions are concise, enabling easy comprehension of the key concepts and methodologies discussed. Overall, the paper demonstrates a high level of writing proficiency.
The paper presents a comprehensive analysis of effective augmentation techniques for visual reinforcement learning (RL). The authors delve into the topic with depth, examining the hardness and diversity of various augmentation methods and their impact on the performance of visual RL algorithms. The analysis is thorough, providing valuable insights into the benefits and limitations of different augmentation strategies in enhancing visual RL.
Weaknesses: 1. To obtain specific values for "Strength D" and "Spatial D," it is necessary to train a strategy on raw data before estimating diversity, as mentioned in line 137. Does this imply that training the strategy using the original, unmodified data precedes the selection of suitable augmentation techniques?
2. The author asserts that CycAug promotes stability throughout the training process, which raises curiosity about the variance of Q values during training. It would be beneficial for the authors to provide further analysis of the stability results. Additionally, considering that SVEA is another method to enhance stability in the context of augmentation, it would be valuable to compare the advantages of both methods when utilizing the same augmentations.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. Can CycAug effectively handle more than three augmentation types? The paper does not provide specific information about the scalability of CycAug beyond three augmentation types. Further investigation or experimentation would be necessary to determine the feasibility and performance of CycAug when applied with a larger number of augmentation types.
2. Does unlimited spatial diversity lead to an increase in the level of hardness?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: What are the limitations of the proposed method? I know there is something about this in the paper. But it is not solid and broad.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate your thorough review and insightful feedback. We will address each of your comments and concerns below and also in our revised manuscript.
----
**W1**: *"Does this imply that training the strategy using the original, unmodified data precedes the selection of suitable augmentation techniques?"*
**A**: No, pre-training a policy on clean data is not a prerequisite for selecting effective data augmentations in practice.
According to the definition of Hardness (${\rm{Hardness}}={\mathcal{R}(\pi, \mathcal{M})}{\big /}{\mathcal{R}(\pi, \mathcal{M}^{\rm{aug}})}$), calculating the Hardness of a certain DA operation requires first training a clean policy $\pi$ on the unmodified environment $\mathcal{M}$, and then testing the average return $\mathcal{R}\left(\pi, \mathcal{M}^{\rm{aug}}\right)$ of this clean policy on the augmented environment $\mathcal{M}^{\rm{aug}}$. However, for most challenging visual RL tasks, we cannot train an effective policy on the clean, unaugmented environment without using DA. Therefore, when using DA on practical complex tasks, we cannot pre-train a clean policy and then calculate the DA's Hardness according to the definition.
Consistent with previous studies [1], we observe a **strong linear positive correlation** between the hardness level and strength of individual DA operations in visual RL (shown in `Figure 11 in Appendix B.1` of our paper). Hence, we can control the **Hardness level and Strength Diversity** of DA by adjusting the average strength and allowed strength ranges of individual operations. In addition, **Spatial Diversity** can be manipulated by defining the set of allowable spatial degrees of freedom for each operation (for example, the allowed translation directions in the Translate operation).
[1] Lin Li et al. Data augmentation alone can improve adversarial training. ICLR 2023
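A minimal sketch of the Hardness computation defined above (our own illustration; `evaluate_return(policy, env)` is a hypothetical helper that runs one evaluation episode and returns its return):

```python
import statistics


def hardness(policy, clean_env, aug_env, evaluate_return, n_episodes=10):
    """Hardness of a DA operation, per the definition in the rebuttal:
    Hardness = R(pi, M) / R(pi, M_aug), where pi is a policy trained on
    the clean (unaugmented) environment M, and M_aug is the augmented one."""
    clean_return = statistics.mean(
        evaluate_return(policy, clean_env) for _ in range(n_episodes))
    aug_return = statistics.mean(
        evaluate_return(policy, aug_env) for _ in range(n_episodes))
    return clean_return / aug_return
```

A Hardness near 1 means the augmentation barely degrades the clean policy; larger values indicate a harder augmentation.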
---
**W2-1:** *"[...]the variance of Q values during training[...]"*
**A:** In `Figure 2 of the Response PDF`, we compared the variance curves of Q values and Q-Targets during training when applying CycAug versus other multi-type DA fusion methods. CycAug demonstrates notably superior data stability, which is crucial during the training process of visual RL.
---
**W2-2:** *"[...]compare the advantages of both methods (CycAug and SVEA) when utilizing the same augmentations."*
**A:** SVEA aims to enhance training stability when applying **strong data augmentation**, which is imperative for improving generalization in visual RL, but will heighten the difficulty of training. Its main approaches include (1) using only weak augmentation PadCrop (called random shift in its original paper) to compute Q-Target values, and (2) mixing data of observations augmented by strong augmentations with those augmented by only weak augmentations to reduce the difficulty of learning from augmented data. The core idea of these two approaches is to leverage weak augmentation to alleviate data instability introduced by strong augmentation.
Unlike SVEA, the DA operations explored in our paper are weak augmentations from SVEA's perspective, which aim to improve sample efficiency during training rather than improve generalization by incorporating prior knowledge. The instability issue that CycAug aims to address occurs when using multiple different weak DAs simultaneously during training, which has the potential to further improve sample efficiency due to higher type diversity but also introduces additional data instability to the training process. Note that CycAug's approach for handling multi-type weak DA fusion can be further combined with SVEA's scheme for strong DA to jointly harness the utilities of both.
---
**Q1**: *"Can CycAug effectively handle more than three augmentation types?"*
**A**: CycAug is a multi-type DA fusion scheme that can incorporate far more than three DA types, as long as these DA operations themselves have sufficient individual effectiveness. We conducted further experiments on the Quadruped Run task to demonstrate the effects of CycAug with three and four components, as shown in the table below. Training curves are illustrated in `Figure 3 (Left) of the Response PDF`. Three-component CycAug utilizing PC, PR, and CS attained superior sample efficiency versus the dual-component CycAug (PC+PR) presented in our paper, exhibiting the capacity for additional expansion of CycAug.
|Aug Component|Return|
|-|-|
|PC+PR | $728.6 \pm 64$ |
|CropShift (CS)|$536.5\pm 89$|
|Translate (Tr)|$467.4\pm 9$|
|PC+PR+CS|$783.9\pm 46$|
|PC+PR+Tr|$677.0\pm 16$|
|PC+PR+CS+Tr|$736.2\pm 62$|
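For illustration, a cyclic multi-type fusion scheme of this kind can be sketched as below. This is our own sketch, assuming CycAug applies exactly one component augmentation per update step and cycles through the component list; the precise scheduling and the augmentation implementations are specified in the paper.

```python
import itertools


class CycAug:
    """Cyclically applies one augmentation per update step, so every batch
    is transformed by a single, consistent DA type (assumed scheduling)."""

    def __init__(self, augmentations):
        # e.g. [pad_crop, rand_pad_resize, crop_shift] for PC+PR+CS
        self._cycle = itertools.cycle(augmentations)

    def __call__(self, batch):
        augment = next(self._cycle)
        return augment(batch)
```

Applying one DA type per batch (rather than mixing types within a batch) keeps the augmented data consistent within each update, which is one plausible source of the lower Q-value variance reported above.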
---
**Q2**: *"Does unlimited spatial diversity lead to an increase in the level of hardness?"*
**A**: Thanks for your very valuable question. In the original paper, we experimentally validated that the hardness level of individual DA operations is highly linearly correlated with the strength of their transformations. Thus we naturally inferred that the hardness remains unchanged as long as the augmentation strength is not altered. To validate this inference, we conducted a quantitative analysis of the DA's hardness on the CartPole Balance task. We manipulated the level of spatial diversity by specifying different spatial degrees of freedom with the same strength. We pre-trained a clean policy on the unaugmented CartPole Balance task, then tested the performance of this policy under different DAs and calculated the Hardness according to the definition. The results in the table below demonstrate that three different augmentation methods exhibit consistent hardness across varying levels of spatial diversity.
| Spatial Diversity Level | 1 | 2 | 4 | 8 | Unlimited |
|-|-|-|-|-|-|
| CropShift-HD|$1.50 \pm 0.59$|$1.52 \pm 0.66$|$1.56 \pm 0.75$|$1.48 \pm 0.42$|$1.53 \pm 0.51$|
| PadResize-HD|$1.32 \pm 0.58$|$1.29 \pm 0.72$|$1.36 \pm 0.68$|$1.34 \pm 0.51$|$1.33 \pm 0.44$|
| Translate-HD|$2.25 \pm 0.71$|$2.27 \pm 0.68$|$2.24 \pm 0.79$|$2.30 \pm 0.82$|$2.28 \pm 0.63$|
---
Rebuttal 2:
Title: Can you please check the rebuttal comments?
Comment: Dear reviewer,
The authors have provided a response to your comments. Can you please take a look, comment accordingly, and update your review?
Thanks,
-Area Chair | Rebuttal 1:
Rebuttal: # Global Response
---
Dear reviewers,
We are sincerely appreciative of the time and effort you dedicated to reviewing our manuscript. Your comprehensive feedback has offered us valuable insights for enhancing clarity and quality. We have individually responded to each reviewer's queries and suggestions. Here, we want to provide a few comments on the common concerns and highlight supplementary experiments included in the `response PDF`. We eagerly look forward to engaging in further discussions with reviewers to address any remaining concerns.
---
**Evaluation on more challenging visual RL tasks.**
In addition to the evaluations presented in the paper on DMC and CARLA, we further conduct evaluations on two challenging tasks, Lift and TwoArmPegInHole, in the table-top manipulation environments of **Robosuite**. We report the average episode return over 5 random seeds after training for 500k frames (with 2 action repeat) and present the complete 1M frames training curves in `Figure 1 of the Response PDF`. The experimental results demonstrate that CycAug achieves higher sample efficiency than the original DrQ-V2.
| Method | Lift | TwoArmPegInHole |
|-------------------|------|-----------------|
| DrQ-V2 |$253.9\pm 125$|$295.2\pm 56$|
| CycAug |$324.5\pm 120$|$350.4\pm 53$|
---
**Demonstration of data stability during CycAug training.**
To compare data stability, we evaluated the variance of Q values and Q-Target values during training when applying CycAug versus other fusion methods on the Quadruped Run task. Variance is consistently assessed through 4 forward passes, employing random data augmentation on identical observations. As shown in `Figure 2 of the Response PDF`, CycAug consistently induces lower Q value variance, empirically demonstrating its superior stability. A more stable training process contributes to CycAug's better sample efficiency.
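A minimal sketch of this variance measurement (our own illustration; `q_network` and `augment` are hypothetical stand-ins for the actual critic and DA operation):

```python
import statistics


def q_value_variance(q_network, augment, observation, action, n_passes=4):
    """Estimate augmentation-induced instability as the variance of Q-values
    over n_passes forward passes, each on an independently augmented copy of
    the same observation (4 passes, matching the protocol in the response)."""
    q_values = [q_network(augment(observation), action) for _ in range(n_passes)]
    return statistics.pvariance(q_values)
```

Lower variance across augmented copies of identical observations indicates a more stable training signal for the critic.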
---
**More DA components incorporated in CycAug**
CycAug is a multi-type DA fusion scheme that can incorporate far more than three DA types, as long as these DA operations themselves have sufficient individual effectiveness. We conducted further experiments on the Quadruped Run task to demonstrate the effects of CycAug with three and four components, as shown in the table below. Training curves are illustrated in `Figure 3 (Left) of the Response PDF`. Three-component CycAug utilizing PC, PR, and CS attained superior sample efficiency versus the dual-component CycAug (PC+PR) presented in our paper, exhibiting the capacity for additional expansion of CycAug.
|Aug Component|Return|
|-|-|
|PC+PR | $728.6 \pm 64$ |
|CropShift (CS)|$536.5\pm 89$|
|Translate (Tr)|$467.4\pm 9$|
|PC+PR+CS|$783.9\pm 46$|
|PC+PR+Tr|$677.0\pm 16$|
|PC+PR+CS+Tr|$736.2\pm 62$|
---
**More training steps to compare the final performance**
In order to ensure the convergence of algorithm performance, we increased the allowed training steps to twice their original values in four representative DMC tasks. We illustrate the average episode return after $3\times 10^6$ frames in the below table. The complete training curves are depicted in `Figure 3 (Right) of the Response PDF`. CycAug achieves faster convergence than DrQ-V2 across all tasks and demonstrates higher final performance on Quadruped Run and Walker Run.
| Task | DrQ-V2 | CycAug |
|----------------|--------|--------|
| Quadruped Run |$694.7\pm 134$|$840.6\pm 57$|
| Walker Run |$699.4\pm 44$|$772.6\pm 17$|
| Quadruped Walk |$920.3\pm 13$|$937.1\pm 13$|
| Hopper Hop |$358.7\pm 47$|$366.5\pm 65$|
Pdf: /pdf/ec8bdafc831799d12589697c33b6cad99e3eb137.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: The paper presents a thorough empirical analysis of visual data augmentations and their effects on RL training. They benchmark various spatial augmentation on two axes of variation, spatial diversity, and hardness, measured by the amount of distortion created in the image. The authors perform a series of experiments to benchmark the effect of augmentations along these two axes and propose best practices for training visual RL policies. Additionally, the authors offer a new data augmentation named Random Pad Resize and empirically demonstrate its benefits. They also propose a multi-DA fusion scheme, named CycAug which boosts sample efficiency even further and prevents training instability.
Strengths: 1. Works like these perform a systematic empirical analysis of a known technique to help bring the community on a common page about its usage are very useful.
2. Proposal of the CycAug method for multi-DA fusion, is a simple, but clever idea, and in combination with RandPR shows state-of-the-art sample efficiency on two benchmarks.
3. The paper is well-written and easy to follow.
4. The authors have provided code and an extensive discussion of their experimental setup in the supplementary material.
Weaknesses: 1. The combination of RandPR and CycAug shows improved performance over previous methods. I failed to find any analysis that isolates each part and analyzes its impact on existing methods.
2. Given that the major contribution of this work is a thorough empirical analysis of existing data augmentation strategies, adding breadth to the experiments and including Embodied environments like Habitat or AI2THOR, or manipulation benchmarks like MetaWorld and studying the effect of augmentations would make this paper even better. Note that the absence doesn't make the work any less useful.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: I would be happy to hear the authors' thoughts on the points mentioned in the weakness section.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: I would encourage the authors to include a section in the main paper or supplementary about the potential societal impacts of their work. This section is currently missing, which may or may not be against the NeurIPS policy.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the detailed and thorough review. We added the suggested experiments to the `Response PDF`. In the following, we seek to address each of your concerns.
----
**W1:** *"The combination of Rand PR and CycAug shows improved performance over previous methods. I failed to find any analysis that isolates each part and analyzes its impact on existing methods."*
**A**: It is hoped that the following explanatory and supplementary experiments can more precisely delineate the individual effectiveness of Rand PR and CycAug.
1. Rand PR is an **individual DA operation** like Translate, Rotate, etc. It can be considered as an improvement of the DA method used in the original DrQ-V2, providing ample spatial diversity while ensuring a low level of hardness. We have conducted extensive experiments on DMC and CARLA tasks to demonstrate that Rand PR achieves higher sample efficiency compared to the DA operation (PadCrop) used in DrQ-V2, as evidenced in `Figures 8 and 9` of our paper. We report here again a comparison of the average performance of Rand PR and PadCrop on DMC and CARLA after limited iteration steps. Note that the superior effectiveness of Rand PR as an individual DA operation is entirely independent of CycAug.
| Augmentation Type | DMC @ 1500k Frames | CARLA @ 100k Steps |
|--------------------|--------------------|--------------------|
| CropShift (DrQ-V2) |$547.96$|$99.7$|
| Rand PR |$588.75$|$110.8$|
2. CycAug is a fusion scheme that aims to combine multiple different DA operations together to achieve higher type diversity while ensuring data stability during training. As a fusion scheme, the effectiveness of the individual DA operations incorporated in this fusion method determines the baseline performance of the method after fusion, as shown in `Figure 5` of our paper. This is why we selected PadCrop and Rand PR as components for CycAug in this paper, as these two DA operations demonstrated markedly superior individual effectiveness compared to other DA. However, this does not imply that CycAug must rely on Rand PR, nor that CycAug can only fuse two DA operations. In fact, any high-performing individual DA can be incorporated into the CycAug scheme. As illustrated in the following table, combining PC with PR and CS using CycAug can achieve better performance than using them individually. However, since Tr has poor individual effectiveness, combining it with PC actually decreases the augmentation effect of PC.
| Augmentation Type | Return | Aug Component | Return |
|-|-|-|-|
| PadCrop (PC) |$570.9\pm 121$| | |
| Rand PR |$602.3\pm 96$| PC+PR |$728.6 \pm 64$|
| CropShift (CS) |$536.5\pm 89$| PC+CS |$586.6\pm 83$|
| Translate (Tr) |$467.4\pm 9$| PC+Tr |$545.7\pm 99$|
Furthermore, we trialled a three-component CycAug utilizing PC, PR, and CS. As depicted in `Figure 3 (Left) of the Response PDF`, this configuration attained superior sample efficiency on Quadruped Run versus the dual-component CycAug (PC+PR) presented in our publication, exhibiting the capacity for additional expansion of CycAug.
|Aug Component|Return|
|-|-|
|PC+PR | $728.6 \pm 64$|
|PC+PR+CS|$783.9\pm 46$|
|PC+PR+Tr|$677.0\pm 16$|
|PC+PR+CS+Tr|$736.2\pm 62$|
---
**W2:** *"[...]adding breadth to the experiments and including Embodied environments like Habitat or AI2THOR, or manipulation benchmarks like MetaWorld and studying the effect of augmentations would make this paper even better."*
**A**: Thank you for your timely suggestion. Adding more challenging experiments can indeed help us better demonstrate the efficacy of our proposed method. Beyond evaluations on the prevalent DMC benchmarks, we have implemented comprehensive experiments on the more practical autonomous driving task CARLA within the original paper. As a supplement, we have conducted evaluations on two challenging tasks, Lift and TwoArmPegInHole, in the table-top manipulation environments of Robosuite. We report the average episode return over 5 random seeds after training for 500k frames (with 2 action repeat) and present the complete 1M frames training curves in `Figure 1 of the Response PDF`. The experimental results demonstrate that CycAug achieves higher sample efficiency than the original DrQ-V2.
| Method | Lift | TwoArmPegInHole |
|-------------------|------|-----------------|
| DrQ-V2 |$253.9\pm 125$|$295.2\pm 56$|
| CycAug |$324.5\pm 120$|$350.4\pm 53$|
In addition to Robosuite, we also attempt to evaluate our method in the indoor visual navigation environments of Habitat and will make efforts to expand the Habitat experimental results in the future version of the paper.
---
Rebuttal Comment 1.1:
Title: Supplementary Experiments in Habitat
Comment: We appreciate the reviewer's earlier suggestion to conduct further evaluations on embodied environments. Habitat presents a challenging indoor visual navigation task and is thus well-suited for further validating the efficacy of our proposed methods. In the table below, we present a comparison of success rates between CycAug and DrQ-V2 after varying training frames, across 5 seeds. Please accept our apologies for the delayed response, as additional experiments require substantial time to run. We hope our responses address your concerns satisfactorily, and we welcome any further discussion you may wish to have.
| | DrQ-V2 | CycAug |
|------|-----|-----|
| Success Rate @ 200k Frames |$0.37\pm 0.11$|**$0.48\pm 0.13$**|
| Success Rate @ 300k Frames |$0.78\pm 0.09$|**$0.85\pm 0.08$**|
---
Rebuttal 2:
Title: Can you please check the rebuttal comments?
Comment: Dear reviewer,
The authors have provided a response to your comments. Can you please take a look, comment accordingly, and update your review?
Thanks,
-Area Chair | null | null | null | null | null | null |
Is Learning in Games Good for the Learners? | Accept (spotlight) | Summary: The paper studies several trade-offs that exist in learning in games. The first is the tradeoff between regret and rewards, for which a generalized notion of equilibria is introduced. It is then investigated whether running a no-swap-regret learning algorithm is efficient. It is shown that this depends on the form of the game: in some games, running a no-swap-regret learning algorithm can be more efficient than employing the Stackelberg strategy. The same question is investigated for learning in games against a no-external-regret learning algorithm.
Strengths: Overall, the paper is really well-written and well-organized. The problem investigated and the results presented are interesting. The generalized notion of equilibria seems to be useful for other analyses of learning in games. It is also important to study the performance of different types of algorithms in games, (such as mean-based no-regret learning algorithms), these results can lead to further insights into algorithm designs for learning in games.
Weaknesses: Overall the paper is pretty solid, a rather minor weakness is that the presentation of the paper can still be improved.
There are quite a number of results presented in the paper. As a result, section 1.1 is a really lengthy section. It seems like the authors are trying to summarize the question investigated, related works, and the obtained results in this section. This can lead to some confusion as some of the notions are yet to be introduced in the paper and this is still at the very beginning of the paper. I would suggest making the section more concise and putting some of the discussion in the later parts of the paper.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. Could the authors elaborate on why exponential weights, FTPL, and similar algorithms are mean-based? Also, Theorem 4 seems to be stated with respect to average reward, while Theorem 5 seems to be saying that a mean-based algorithm cannot attain the total rewards (which seems unsurprising?); I wonder how these two theorems should be interpreted together.
2. It is mentioned that Proposition 5 can be improved, though the query complexity is still inversely proportional to the best response region volume. But from Theorem 6, it seems that through stimulating the best response queries, the complexity is independent of the best response region volume.
3. It seems that mean-based algorithms often have no external regret. I wonder if they can also be no-swap-regret? If so, how should one interpret Theorem 7?
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: No
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your comments! We will certainly take another pass on clarifying the narrative in the introduction, and can move some of the discussion of results to later in the paper.
In terms of mean-based algorithms, all of these algorithms resemble approximate/smoothed versions of Follow the Leader, in that actions are almost never chosen unless they are close to historically optimal (see Appendix D of [BMSW17] for a more thorough analysis).
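As a concrete illustration (our own sketch, not part of the rebuttal), exponential weights chooses each action with probability depending only on its cumulative historical reward; actions far from the historical best therefore receive exponentially small probability, which is the mean-based property:

```python
import math


def exponential_weights(cumulative_rewards, eta):
    """One round of exponential weights: sampling probabilities depend only
    on each action's cumulative historical reward, so actions far below the
    historical best get exponentially small probability (mean-based)."""
    best = max(cumulative_rewards)  # subtract max for numerical stability
    weights = [math.exp(eta * (r - best)) for r in cumulative_rewards]
    total = sum(weights)
    return [w / total for w in weights]
```

With a moderate learning rate, the historically best action already absorbs almost all probability mass, mirroring the "smoothed Follow the Leader" behavior described above.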
We apologize for the confusion regarding Theorems 4 and 5; both of these theorems can be interpreted either in terms of total or average reward (by multiplying/dividing by $T$), and we will change the Theorem 5 statement for consistency. The takeaway for Theorem 5 is that all mean-based algorithms are strictly stronger than “worst-case” no-regret algorithms for certain games, and thus are harder to exploit (and an average reward of $\text{Val}_A (\emptyset, \mathcal{E})$ cannot be approached).
As for Proposition 5, the exact BR query complexity for Stackelberg equilibria depends on a number of different parameters related to the structure of the game, and there are some technical intricacies we avoid discussing in the body for simplicity; to summarize, all algorithms which obtain accuracy $\epsilon$ with $\text{poly}(1/\epsilon)$ queries require that the best response regions have volume at least $1/\text{poly}(1/\epsilon)$. However, in the query model the accuracy of these algorithms can be boosted to e.g. $\log(1/\epsilon)$ (by binary searching), yet the relationship between query complexity and volume is still fixed (finding initial feasible points is the bottleneck). We are unable to take advantage of this accuracy boost in our setting due to the imprecision inherent in the learning setting. As such, we make use of the weaker bound so as to align with the error terms resulting from learning. Once the query complexity is established to be $\text{poly}(1/\epsilon)$, Theorem 6 only requires that the volume is at least $1/\text{exp}(1/\epsilon)$, but we still need Assumption 2 nonetheless to establish the query complexity. We will clarify this discussion in the paper.
Finally, while many mean-based algorithms are no-(external)-regret, results from [DSS19] imply that no mean-based algorithms can be no-swap-regret (or no-internal-regret) when there are more than two actions (via an example of a 3-action game where the maximum reward against any no-swap learner is strictly below that obtainable against any mean-based learner). Note that the definitions of external and internal regret coincide for only 2 actions.
[BMSW17] Selling to a No-Regret Buyer - Braverman, Mao, Schneider, Weinberg
[DSS19] Strategizing against No-regret Learners - Deng, Schneider, Subramanian | Summary: This submission studies questions surrounding playing against a no-regret learner in a repeated game setting. These questions are motivated by previous observations that while it is known that when all players play no-regret strategies the empirical frequency of play approaches an equilibrium, a player can sometimes do better by deviating to a different strategy/algorithm (that is not no-regret).
In particular, the authors focus on four questions: (1) When does reward trade off with regret? (2) Under what game settings is playing a no-swap-regret algorithm a stable equilibrium? (3) How to play against a no-regret learner? (4) How can one learn the Stackelberg strategy through repeated play against a no-regret learner?
Towards answering question (1), the authors consider a generalized notion of equilibrium, (Phi-A, Phi-B)-equilibrium, where player A is restricted to playing no-Phi-A-regret strategies and player B is restricted to playing no-Phi-B-regret strategies. Each choice of (Phi-A, Phi-B) induces a polytope of (Phi-A, Phi-B)-equilibria. They show that for any equilibrium in this polytope, there exists a pair of no-regret algorithms in (Phi-A, Phi-B) which converge to it.
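For background, the standard notion of $\Phi$-regret from the online learning literature (the paper's exact notation may differ) measures performance against the best strategy modification in a class $\Phi_A$:

```latex
\mathrm{Reg}_{\Phi_A}(T) \;=\; \max_{\phi \in \Phi_A} \sum_{t=1}^{T} u_A\bigl(\phi(x_t),\, y_t\bigr) \;-\; \sum_{t=1}^{T} u_A(x_t,\, y_t)
```

Taking $\Phi_A$ to be the constant maps recovers external regret, while taking it to be all swap functions over actions recovers swap regret.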
To assist in answering question (2), the authors consider a "metagame", in which at the beginning of a repeated game, both players simultaneously announce and commit to an algorithm to use during the game. The authors' main results towards answering (2) are a set of sufficient and necessary conditions for (a) some pair of no-swap-regret algorithms to form a Nash equilibrium in the metagame and (b) all pairs of no-swap-regret algorithms to form a Nash equilibrium in the metagame. In an effort to characterize which types of games playing no-swap-regret algorithms is "optimal", the authors show that if a game G does not contain a pure Nash equilibrium, then there does not exist a pair of no-swap-regret algorithms which form a Nash equilibrium in the metagame, when player utility functions in G are randomly perturbed.
Towards answering (3), the authors show that there exists a no-(external)-regret algorithm for player B and a strategy for player A such that the average reward for player A converges to their best possible feasible reward. However, the authors show that against any mean-based no-regret learner (a popular subset of no-external-regret algorithms), there does not exist a strategy for player A which can get "close to" the best possible feasible reward.
Finally, the authors answer (4) by showing how to learn the Stackelberg strategy by simulating best-response queries against the no-regret learner. While they show that in general this may require exponentially-many queries, polynomially-many queries are sufficient if the no-regret learner is playing a no-adaptive-regret algorithm.
Strengths: While the authors are not the first to consider the general problem of playing a repeated game against a no-regret learner, this paper both introduces and addresses a (very) wide range of important and well-motivated questions surrounding the topic. The results contained in this submission provide valuable insights into this highly nuanced problem. While no one result stands out in particular, the sheer breadth of the results obtained by the authors in this submission is very impressive and the submission as a whole presents the clearest picture to-date of "the right thing to do" when playing against a no-regret learner.
Weaknesses: With that being said, the breadth of the results obtained by the authors makes it unclear what the main takeaway of the submission should be. At times, the submission reads like a laundry list of results about playing against a no-regret learner. Additionally, a longer discussion on related works in the main body (particularly (10) and (22)) would help someone who is not as familiar with the area better understand the main contributions of the authors.
Technical Quality: 4 excellent
Clarity: 2 fair
Questions for Authors: In Section 3, why is Nash equilibrium the "right" solution concept for the metagame?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 2 fair
Contribution: 3 good
Limitations: The authors have adequately addressed the limitations of their work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your feedback! Indeed, we view our results as indicating that the answer to the question of “what should you do when playing against a no-regret learner?” is very much “it depends”, and we aim to explore several branches of this decision tree (is their algorithm known? is the game known? etc.). A second-order takeaway is perhaps that the Stackelberg value is often a “reasonable” benchmark to target. We will certainly use the additional provided page in the camera-ready version to further clarify the relevant background and the narrative thread of our results.
As for the meta-game, the question of alternate possible solution concepts is definitely an interesting one. We focused on Nash equilibria because the meta-game is essentially a “one-round game” where players act independently (by committing to an algorithm/strategy), and here Nash equilibria exactly capture whether players have an incentive to deviate up-front from committing to a specific learning algorithm.
---
Rebuttal Comment 1.1:
Comment: Thanks for the reply. I have read the authors' rebuttal. | Summary: The paper explores tradeoffs between reward and regret in repeated gameplay between two agents. It introduces a concept of generalized equilibrium that allows for different regret constraints, resulting in feasible values for each agent. The paper shows that such equilibria can be reached by algorithms maintaining regret guarantees against any opponent. The paper also examines tradeoffs in terms of the opponent's algorithm choice and characterizes the maximal reward achievable against a no-regret learner. It demonstrates that different classes of no-regret algorithms can lead to varying rewards.
Strengths: - Theoretical analysis is solid and convincing.
- The problem studied is interesting. Although running no-regret dynamics is known to lead to CCE, no prior work questions whether running a no-regret algorithm is itself the right choice.
Weaknesses: I think some realistic running examples could be provided for better illustration.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: See in Weaknesses
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: See in Weaknesses
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your comments! While our focus for the paper is intended to be primarily theoretical, in the appendix we give examples of games where we show explicit separations between feasible equilibrium values, and we are happy to explore simulating algorithms on these games. | Summary: This paper addresses several interesting questions regarding the tradeoff between reward and regret in repeated gameplay between two agents. Three problems are sequentially investigated. 1. The paper provides a characterization of the setting when running a no-swap-regret learning algorithm is preferred over playing the Stackelberg strategy; it further showed that such a setting has measure zero and almost does not happen. 2. This paper shows that if the opponent is running any no-regret algorithm, the utility of the player is upper bounded by the unconstrained-external value of the game; such an upper bound is achievable for a particular no-regret algorithm of the opponent. 3. The paper shows that it is possible to convert any best-response query algorithm for finding Stackelberg equilibria via best-response queries to an adaptive strategy that learns Stackelberg equilibria via repeated play against a generic no-regret learner, albeit potentially at the cost of an exponential blow-up in the number of rounds.
Strengths: The several questions addressed in this paper are very interesting. In algorithmic game theory, many existing works focus on algorithms for finding an equilibrium via repeated gameplay. However, in repeated gameplay, an agent's interest is often in maximizing reward, giving it the motivation to deviate from the regret minimization algorithm. This paper studies when deviating from the regret minimization algorithm is beneficial and when it is not.
Weaknesses: There are still many unsolved open questions. For example, for specific no-regret algorithm classes, it is not clear how much one can exploit them.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: NA
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: NA
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for the review! Indeed there are several remaining open questions that we think would be interesting to explore in future work. | null | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: This paper considers equilibria between agents that have arbitrary regret benchmarks (corresponding to different equilibrium concepts), and the relation between those equilibria and the interactions of no-regret learning agents. It is known in the literature that an agent who knows that the other is playing a no-external-regret algorithm can guarantee themself the Stackelberg leader value (i.e., $Val_A(\emptyset,\mathcal{E})$ in the terminology of this paper). This paper extends that results by showing that in fact it can be better to play a no-external-regret learning strategy against a no-swap-regret learner (i.e., a learning algorithm with a weaker regret guarantee can get higher utility, all else being equal). A further result is that any generalized equilibrium is reachable by a pair of regret-minimizing learning strategies.
The paper also considers the complexity of learning Stackelberg strategies; it turns out to be easier to learn a Stackelberg leader strategy against a learning algorithm with a stronger regret guarantee, because it reacts more quickly to best-respond to changes in the other player's behavior.
Strengths: This is an extremely strong paper. The question of when minimizing regret benchmarks also lead to good performance (the thing that we actually care about, in general) is really important, and this paper provides rigorous, compelling, and general answers. The generalized equilibrium framework and the connections between learning and equilibrium are very clear and likely to have a significant impact on the learning in games literature. I strongly expect to refer to this paper in the future.
The complexity results at the end are a nice touch as well.
Weaknesses: I have no major complaints about the paper. Here are some minor comments/issues:
- I was initially very surprised by Theorem 1; a little more hand-holding about why this doesn't contradict Barman & Ligett [2015] might have helped. (Basically, the result doesn't require the algorithm pair to _find_ an optimal equilibrium, instead it will converge without regret to an exogenously _specified_ equilibrium).
- p.6: "Further, each equilibrium set can be optimized over via a linear program.": I'm not sure what this means.
- p.8: "Let $\sigma_{i,t}$ be the cumulative reward resulting from playing action $i$ for the first $t$ rounds": It would be clearer to avoid the notational collision with strategies by using a different letter
- p.8: A few more details in the proof sketch for Theorem 5 would be helpful; it took me a while to convince myself even that the statement made sense.
[Barman & Ligett 2015]: "Finding Any Nontrivial Coarse Correlated Equilibrium Is Hard"
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: none
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 4 excellent
Limitations: n/a
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your comments!
You are correct that Theorem 1 applies to exogenously specified equilibria, and does not require “learning” the target equilibrium on the fly. However, our focus is on two-player games, for which it is also possible to efficiently optimize over each of the equilibrium sets we consider via linear programming (as we show in Proposition 3). Even for multiplayer games, one can always compute an “optimal” (e.g. in welfare, or for any player) (C)CE via a linear program which is polynomial in the size of the game, with variables for the probability of each action profile and linear constraints for each player-action(-action) pair which enforce regret constraints. The size of this LP is polynomial in {# actions} but exponential in {# players}, as the normal-form representation of a many-player game has size exponential in {# players}. In contrast, the results of [BK15] apply to “succinct” games with structured rewards (e.g. routing games, see [PR08] for many other examples) in which rewards are fully determined by a representation which is polynomial in {# players, # actions}, and as such are not in contradiction with the LP approach which uses a normal-form representation.
We will clarify this in the paper, and will also fix the notation overload (thanks for catching this!) as well as further clarify the intuition behind Theorem 5; the key idea there is showing that mean-based algorithms are strictly stronger than “worst-case” no-regret algorithms, and thus are harder to exploit.
[PR08] Computing correlated equilibria in multi-player games - Papadimitriou & Roughgarden
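For concreteness, the linear regret constraints above can be sketched as a feasibility check: a distribution over action profiles is a CCE iff no player gains by committing to a fixed deviation (and maximizing welfare subject to these constraints yields the LP). The game (Chicken) and the helper `is_cce` below are illustrative choices, not from the paper.

```python
from itertools import product

def is_cce(u1, u2, dist, tol=1e-9):
    """Check the CCE constraints of a 2-player game.

    u1, u2: payoff matrices indexed [a][b]; dist[a][b]: probability of
    profile (a, b). These per-deviation inequalities are exactly the
    linear constraints of the LP described above.
    """
    nA, nB = len(u1), len(u1[0])
    profiles = list(product(range(nA), range(nB)))
    ev1 = sum(dist[a][b] * u1[a][b] for a, b in profiles)
    ev2 = sum(dist[a][b] * u2[a][b] for a, b in profiles)
    for a_dev in range(nA):  # player A deviates to the fixed action a_dev
        if sum(dist[a][b] * u1[a_dev][b] for a, b in profiles) > ev1 + tol:
            return False
    for b_dev in range(nB):  # player B deviates to the fixed action b_dev
        if sum(dist[a][b] * u2[a][b_dev] for a, b in profiles) > ev2 + tol:
            return False
    return True

# Chicken: action 0 = Dare, action 1 = Swerve.
u1 = [[0, 7], [2, 6]]
u2 = [[0, 2], [7, 6]]
ce = [[0, 1/3], [1/3, 1/3]]             # classic correlated equilibrium
ok = is_cce(u1, u2, ce)                 # satisfies the CCE constraints
bad = is_cce(u1, u2, [[1, 0], [0, 0]])  # all mass on (Dare, Dare) does not
```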
---
Rebuttal Comment 1.1:
Comment: Thanks for the additional clarifications! | null | null | null | null | null | null |
IEBins: Iterative Elastic Bins for Monocular Depth Estimation | Accept (poster) | Summary: The paper introduces an Iterative Elastic Bins (IEBins) approach for monocular depth estimation. Many conventional monocular depth estimation approaches use a soft-argmax representation (Eq. (4)) that sums the product between depth probabilities and pre-defined depth bins. However, the large number of pre-defined bins hinders model convergence (@L.43). To alleviate the problem, the paper proposes to use adaptive ranges of depth bins for each pixel based on previous intermediate depth estimates. As the number of iteration steps grows, the depth ranges get smaller for fine-detail estimation (while taking uncertainty into account). The paper shows empirically good accuracy over previous methods.
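The soft-argmax of Eq. (4) can be sketched as follows; the uniform bin layout, depth range, and function name here are illustrative assumptions, not the paper's exact setup.

```python
import math

def soft_argmax_depth(logits, d_min=1e-3, d_max=10.0):
    """Expected depth under a softmax over per-bin scores, with uniformly
    spaced bin centers on [d_min, d_max] (illustrative layout)."""
    n = len(logits)
    width = (d_max - d_min) / n
    centers = [d_min + (i + 0.5) * width for i in range(n)]
    m = max(logits)                        # stabilize the softmax
    exps = [math.exp(l - m) for l in logits]
    z = sum(exps)
    return sum(c * e / z for c, e in zip(centers, exps))

# A sharply peaked distribution recovers (approximately) that bin's center,
# while flat scores pull the estimate toward the middle of the depth range.
logits = [0.0] * 16
logits[4] = 50.0
depth = soft_argmax_depth(logits)
```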
Strengths: + Good results
Table 1, 2, 3, and 4 show that the method achieves better accuracy than published methods in the public benchmark datasets (KITTI Eigen, KITTI, NYU-Depth-v2, and SUN RGB-D).
+ Good ablation study
Tables 5 and 6 demonstrate how each design choice affects/improves the accuracy. Table 5 shows the strength of the proposed IEBins idea over previous approaches (UBins, SIBins, AdaBins, and IBins). Table 6 also justifies the design choice of the number of bins.
Weaknesses: - Question on the number of stages (Fig. 3)
The method uses 6 stages (iteration steps) in total. I wonder what happens if it uses more iteration steps during inference (i.e., 6 iterations during training, but 8 or 10 iterations at test time). Will the method keep improving in accuracy with more iteration steps? Or, even during training, what if it uses more than 6 iteration steps? How is the number 6 set?
- Possibly an unfair comparison? Table 5, Comparison with different bin types.
I wonder if the other methods (UBins, SIBins, AdaBins, and IBins) also use the iterative optimizer. For a fair comparison, I think it would also be good to prepare a baseline with the iterative optimizer and change only the depth bin types. This would differentiate the performance gain of the bin types from that of the iterative optimizer.
Despite the good results, I would rate the paper Borderline Reject for now, mainly due to the possibly unfair comparison in Table 5. It is not clear whether the source of the gain is the iterative optimizer or the depth bin types.
----
I share Reviewer 4WPj's concern that the main contributions do not seem very strong. Iterative refinement ideas have been proposed in other literature (e.g., optical flow, SfM), and the paper demonstrates an empirical accuracy gain. That said, I am raising my score to Borderline Accept as the authors' response resolves my main concerns. I hope all the comments and concerns from the discussion phase are reflected in the updated version.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: - In Fig. 2 the three depth maps on the right look the same. They probably need to be changed to the actual experiment results.
- In the last example (chair) in Fig. 5, the object boundary near the chair is still blurry. (Just out of curiosity) I wonder if the proposed representation can resolve this blurry object boundary problem: because it limits the depth range, I would expect it to output much clearer depth around object boundaries.
- In Table 7, what is the source of the faster inference time than BinsFormer?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: The paper didn't include its limitations or societal impact.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ### __We thank our reviewer for the constructive feedback and comments.__
### _W1: Question on the number of stages (Fig. 3)_
A1: During both the training and inference phases, we find that when the number of stages exceeds 6, the performance changes very little. More stages require longer training and inference time and greater memory consumption. Hence, we set the number of stages to 6.
### _W2: Possibly an unfair comparison? Table 5, Comparison with different bin types._
A2: We apologize for not stating the settings clearly. Because UBins, SIBins and AdaBins are non-iterative methods, we do not add the iterative optimizer for them. On the other hand, Baseline + AdaBins (276M) has more parameters than Baseline + IEBins (273M) because AdaBins requires an additional Transformer architecture to generate adaptive bins. IBins is a variant of IEBins, acquired by replacing the elastic target bin with the original target bin at each stage. We used the iterative optimizer for IBins when reporting its results in Table 5. To further elaborate, we take the well-known AdaBins as an example and feed its adaptive depth candidates into the iterative optimizer. The corresponding results of Abs Rel, RMSE, log10, $\delta <1.25$, $\delta <1.25^2$, $\delta <1.25^3$ are 0.089, 0.321, 0.038, 0.932, 0.991, and 0.998 on the NYU-Depth-v2 dataset, which are worse than those of Baseline + IEBins (0.087, 0.314, 0.038, 0.936, 0.992, and 0.998).
### _Q1: Probably it may need to be changed to the actual experiment results for three depth maps in Fig.2._
A3: We will modify our paper according to this nice advice.
### _Q2: In the last example (chair) in Fig. 5, the object boundary near the chair is still blurry. (Just out of curiosity) I wonder if the proposed representation can resolve this blurry object boundary problem because it limits the depth range, so I would expect it outputs much clearer depth around._
A4: Yes, this is possible because the proposed representation of iterative division of bins is sensitive to the object boundaries due to large depth variations in these regions.
### _Q3: In Table 7, what is the source of the faster inference time than BinsFormer?_
A5: BinsFormer contains two decoders, a pixel decoder and a transformer decoder, and frequent interactions occur between these two decoders, which may increase the inference time significantly. Although our method is iterative, the developed iterative optimizer is lightweight and operates at 1/4 resolution.
---
Rebuttal Comment 1.1:
Title: Discussion
Comment: Thanks for sharing your responses!
However, some points remain unclear after reading the response.
> W1: Question on the number of stages (Fig. 3)
What does ```when the number of stages exceeds 6, the performance changes very little``` mean? Does it mean that the accuracy has plateaued? It would have been great if the paper had provided an analysis of different numbers of iteration steps during both training and testing (even with different training/testing numbers, e.g., training with 6 iterations, testing with 8).
> Q2: In the last example (chair) in Fig. 5, the object boundary near the chair is still blurry.
I am sorry, but I didn't understand the answer clearly. Could you elaborate on why ```the proposed method is sensitive to the object boundaries due to large depth variations in those regions```?
---
Reply to Comment 1.1.1:
Title: Discussion
Comment: ### __Thank you very much for your feedback.__
### _W1: What does it mean by that when the number of stages exceeds 6, the performance changes very little.? Does it mean that the accuracy has plateaued? It would have been much great if the paper provided an analysis on trying out different numbers of iteration steps during both training and testing (even trying out different training/testing numbers, e.g., training 6 iterations, testing 8 iterations)._
A1: As suggested, we have provided the results of different stages below (the number of stages for training and inference remains the same).
| Stage | Abs Rel | RMSE | log10 | $\delta <1.25$ | $\delta <1.25^2$ | $\delta <1.25^3$ |
|:---|:---|:---|:---|:---|:---|:---|
| Stage1 | 0.093 | 0.333 | 0.041 | 0.921 | 0.991 | 0.998 |
| Stage2 | 0.090 | 0.325 | 0.040 | 0.927 | 0.991 | 0.998 |
| Stage3 | 0.089 | 0.320 | 0.039 | 0.931 | 0.991 | 0.998 |
| Stage4 | 0.088 | 0.317 | 0.038 | 0.933 | 0.992 | 0.998 |
| Stage5 | 0.087 | 0.315 | 0.038 | 0.935 | 0.992 | 0.998 |
| Stage6 | 0.087 | 0.314 | 0.038 | 0.936 | 0.992 | 0.998 |
| Stage7 | 0.087 | 0.313 | 0.038 | 0.935 | 0.992 | 0.998 |
As the stage increases, the performance gradually improves until saturated, and when the number of iterations exceeds 6, the performance changes very little.
Then we train with 6 stages and use more stages in the inference phase. The results are as follows:
| Stage | Abs Rel | RMSE | log10 | $\delta <1.25$ | $\delta <1.25^2$ | $\delta <1.25^3$ |
|:---|:---|:---|:---|:---|:---|:---|
| Stage6 | 0.087 | 0.314 | 0.038 | 0.936 | 0.992 | 0.998 |
| Stage7 | 0.087 | 0.314 | 0.038 | 0.936 | 0.992 | 0.998 |
| Stage8 | 0.087 | 0.314 | 0.038 | 0.936 | 0.992 | 0.998 |
As we can see, the performance plateaus when using 7 or 8 stages at inference time in this case.
### _Q2: Could you elaborate more on why the proposed method is sensitive to the object boundaries due to large depth variations in those regions?_
A2: Generally, the depth variations are large at object boundaries. In these regions, due to the large depth variations, the proposed method can easily classify respective region into different depth ranges at initialization stage and further iteratively refine these depth ranges in subsequent stages. Hence, it is possible for the proposed representation to solve this blurry object boundary problem. | Summary: The paper introduces a method for monocular depth. The method uses a recurrent network based on RAFT to predict a probability distribution over a set of bins, which enables the depth at each iteration to be computed as the expectation over all bins and the margins of each bin to be adjusted using the computed variance. In the subsequent iterations, a finer-grained search is performed in the adjusted bins. This new strategy is introduced as "IEBins". The authors motivate this strategy as being robust to uncertain initial predictions.
Strengths: The method outperforms prior work across multiple standard evaluation settings.
The method is interpretable, producing confidence for different regions and steadily improving depth predictions.
To the best of my knowledge, the iterative design is substantially different from the usual approach for monocular depth.
The introduced method is very fast.
The IEBins are compared against other binning strategies in ablation experiments.
Weaknesses: No obvious weaknesses.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: None
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 4 excellent
Limitations: I did not see discussion of limitations
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ### __We thank our reviewer very much for the highly positive feedback.__ | Summary: This paper introduces a classification-regression-based monocular depth estimation pipeline.
Previous classification- or classification-regression-based MDE approaches often suffer from high complexity and loss of generalizability, since better performance requires more depth hypotheses.
In this paper, it proposes the iterative elastic bins (IEBins) technique, which iteratively adjusts bin ranges as the prediction converges, so that with fewer bins and fast inference the method still performs at the state-of-the-art level.
It additionally introduces a dedicated transformer-based feature extractor and a GRU-based iterative optimizer.
Strengths: This paper is well-written and easy to follow.
The proposed IEBins is an intuitive and interesting idea, and the performance also backs up its effectiveness.
Weaknesses: More in-depth analysis of the 'iterative' aspect of IEBins would be appreciated.
1) For example, what happens when iteration is less or more than 6? How much does the number of iterations affect the final performance?
2) The authors claim that the proposed GRU-based iterative optimizer is also one of their contributions, saying that it helps capture temporal information during IEBins-based depth estimation. Yet, there is no ablation study regarding this. A comparison between adding/excluding the GRU unit, alongside its effectiveness with respect to the number of iterations, would help strengthen the claimed contribution.
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: 1. I have some questions regarding updating rules of the proposed IEBins.
1-1. Eq. 7 denotes that the elastic target bin edges are modified using uncertainty from the previous stage's probability distribution. Does this mean that bin edges are set differently for each pixel? If so, the network must remember separate bin ranges for every pixel, which seems to require huge memory. Correct me if I'm wrong.
1-2. In L133, the paper says that the d_min and d_max values are updated with new bin edges. In the next iteration, are 16 bins additionally set within this new min-max range? If so, does it mean that in the 2nd iteration, for example, the new bin range is approximately 1/256 of the original min-max range?
2. In Fig 5, the range of the proposed method seems significantly off compared to GT. How is the visualization done?
3. In Tab 6, why does the performance drop when using 32 bins instead of 16? Is this aligned with the overfitting issue stated in the introduction section? If so, doesn't it mean that the proposed method already suffers with 32 bins, while AdaBins, for example, can operate with up to 256 bins? How can this phenomenon be explained?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 2 fair
Limitations: The authors did not address any limitations. Yet, other than weakness and questions I commented above, I don't see any severe limitation to this paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ### __We thank our reviewer for the constructive feedback and comments.__
### _W1: What happens when iteration is less or more than 6? How much does the number of iterations affect the final performance?_
A1: The results of IEBins at different stages are shown below:
| Stage | Abs Rel | RMSE | log10 | $\delta <1.25$ | $\delta <1.25^2$ | $\delta <1.25^3$ |
|:---|:---|:---|:---|:---|:---|:---|
| Stage1 | 0.093 | 0.333 | 0.041 | 0.921 | 0.991 | 0.998 |
| Stage2 | 0.090 | 0.325 | 0.040 | 0.927 | 0.991 | 0.998 |
| Stage3 | 0.089 | 0.320 | 0.039 | 0.931 | 0.991 | 0.998 |
| Stage4 | 0.088 | 0.317 | 0.038 | 0.933 | 0.992 | 0.998 |
| Stage5 | 0.087 | 0.315 | 0.038 | 0.935 | 0.992 | 0.998 |
| Stage6 | 0.087 | 0.314 | 0.038 | 0.936 | 0.992 | 0.998 |
| Stage7 | 0.087 | 0.313 | 0.038 | 0.935 | 0.992 | 0.998 |
As the stage increases, the performance gradually improves until saturated, and when the number of iterations exceeds 6, the performance changes very little.
### _W2: Ablation study on the GRU-based iterative optimizer._
A2: Nice comment. We have presented the results at each stage after excluding the GRU unit from the iterative optimizer below. For the results of the whole framework, please refer to A1 to W1.
| Stage | Abs Rel | RMSE | log10 | $\delta <1.25$ | $\delta <1.25^2$ | $\delta <1.25^3$ |
|:---|:---|:---|:---|:---|:---|:---|
| Stage1 | 0.093 | 0.334 | 0.041 | 0.920 | 0.991 | 0.998 |
| Stage2 | 0.091 | 0.327 | 0.040 | 0.925 | 0.991 | 0.998 |
| Stage3 | 0.090 | 0.323 | 0.039 | 0.928 | 0.991 | 0.998 |
| Stage4 | 0.089 | 0.320 | 0.039 | 0.930 | 0.991 | 0.998 |
| Stage5 | 0.088 | 0.318 | 0.039 | 0.931 | 0.992 | 0.998 |
| Stage6 | 0.088 | 0.317 | 0.039 | 0.932 | 0.992 | 0.998 |
The results verify the efficacy of the GRU-based iterative optimizer.
### _Q1.1: Confusion over bin edges._
A3: The bin edges vary between pixels at every stage except the initialization stage. The iterative optimizer operates at 1/4 resolution in our design, making the memory consumption affordable.
### _Q1.2: Are 16 bins additionally set within this new min-max range? And in 2nd iteration, new bin range are approximately 1/256 from original min-max range?_
A4: Yes, 16 bins are set within the target bin, using the target bin as the new min-max range at each stage. For the original target bin, the new bin range is 1/256 of the original min-max range at the 2nd stage. For the elastic target bin, this ratio varies from pixel to pixel and image to image due to the adaptive nature introduced by the elastic target bin.
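A minimal sketch of this subdivision (the non-elastic case; the helper name and numbers below are illustrative, not from the paper):

```python
def subdivide(d_min, d_max, n_bins, target_idx):
    """Uniformly split [d_min, d_max] into n_bins and return the edges of
    the chosen target bin, which become the search range of the next stage.
    This is the non-elastic case: no uncertainty-based edge stretching."""
    width = (d_max - d_min) / n_bins
    lo = d_min + target_idx * width
    return lo, lo + width

# With 16 bins, two stages shrink the search range by 16 * 16 = 256.
lo, hi = subdivide(0.0, 10.0, 16, 7)   # stage 1: pick some target bin
lo, hi = subdivide(lo, hi, 16, 3)      # stage 2: subdivide it again
ratio = (hi - lo) / 10.0               # 1/256 of the original range
```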
### _Q2: How is the visualization done?_
A5: We scale each depth map individually here to acquire the colormap. The GT depth maps are not completely dense and have missing values in border regions, which may affect the final colormap.
### _Q3: In Tab 6, why does the performance drop when using 32 bins instead of 16? Is this aligned with overfitting issue stated in the introduction section?_
A6: This phenomenon may not be induced by the overfitting issue. As the number of bins increases, it becomes more and more difficult to classify the true optimal candidate for the next stage from the larger number of depth candidates. When the classified target bin deviates so far from the ground-truth depth that the true depth cannot be covered even by the elastic target bin, large depth errors are generated in subsequent stages, thereby affecting the final performance.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for their rebuttal.
W2. The performance boost obtained from the GRU-based optimizer is around 0.001-0.005 RMSE-wise, which is not very large. Also, it degrades results in earlier stages of the iteration. Can the authors provide more insight or arguments regarding this?
Q2. Minor note. I think it would be more effective to keep the min-max normalizing values the same throughout a single sample. As given now, it is hard to know whether the method only preserves fine-grained detail or actually predicts a precise relative depth map.
---
Reply to Comment 1.1.1:
Title: Discussion
Comment: ### __Thank you very much for your feedback.__
### _W2: The performance boost obtained from the GRU-based optimizer is around 0.001-0.005, RMSE-wise, which is not significantly high. Also, it degrades results on earlier stages of iterations. Can authors provide more insight or arguments regarding this?_
### _W2.1: The performance boost obtained from the GRU-based optimizer is around 0.001-0.005, RMSE-wise, which is not significantly high._
A1: As we can see from the results, the advantage of the GRU-based iterative optimizer is not obvious when the number of stages is 1, which may be because no historical hidden state is available yet. As the number of stages increases, the advantage of the GRU-based iterative optimizer gradually grows and then stabilizes. The GRU unit we use in our iterative optimizer is very lightweight, containing only three separable 5x5 convolution kernels. Relative to the number of parameters introduced, the performance gain brought by the GRU unit is competitive.
### _W2.2: Also, it degrades results on earlier stages of iterations. Can authors provide more insight or arguments regarding this?_
A2: At stage 7, the depth candidates are very close to each other. In this case, it is very difficult for the iterative optimizer to classify the true optimal depth candidate, and mistakes become easy to make, which may cause slight performance degradation.
### _Q2: Minor note. I think it would be more effective to set the min-max normalizing values the same throughout a single sample. From what is given now, it is hard to know whether the model only preserves fine-grained detail or actually predicts a precise relative depth map._
A3: We will revise our paper according to this nice advice. | Summary: The paper tackles the task of monocular depth estimation which is of fundamental importance in computer vision and has many downstream applications. Several recent works use bin-based approaches and follow the adaptive binning framework where the distribution of bin centers on the depth interval (that are treated as depth candidates) can vary per image or per-pixel.
This paper proposes a novel approach to the adaptive binning framework called IEBins. Instead of working with only one (potentially large) set of bins (per image or per pixel), IEBins refines the binning structure in an iterative manner. Starting with a coarse uniform division of the original depth interval (dmin, dmax), the idea is to recursively find and uniformly divide the ‘target’ bin by using the ’target’ bin as the new target depth interval for the next step. In addition, the target bin is made elastic (the new target depth interval’s ends are allowed to change) based on the uncertainty estimate.
The work achieves state-of-the-art results on popular benchmarks including KITTI and NYU-Depth-v2 and the authors promise to release the code and models publicly.
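The recursive find-and-divide loop summarized above can be sketched as follows. This is our own simplification (no elastic widening of the target bin), and `score_fn` is a stand-in for the network's per-candidate probabilities:

```python
import numpy as np

def iebins_search(score_fn, d_min=0.1, d_max=10.0, n_bins=16, n_stages=6):
    """Iteratively locate and uniformly subdivide the target bin.

    score_fn(centers) returns per-candidate scores; in the paper these
    come from the GRU-based optimizer, here the caller supplies them.
    """
    lo, hi = d_min, d_max
    for _ in range(n_stages):
        edges = np.linspace(lo, hi, n_bins + 1)
        centers = 0.5 * (edges[:-1] + edges[1:])
        target = int(np.argmax(score_fn(centers)))
        # the target bin becomes the depth interval for the next stage
        lo, hi = edges[target], edges[target + 1]
    return 0.5 * (lo + hi)

# Toy check: scores peak near a "true" depth of 3.7 m
depth = iebins_search(lambda c: -np.abs(c - 3.7))
```

With 16 bins and 6 stages the final interval width is (10 − 0.1)/16⁶ ≈ 6e-7 m, which illustrates why a small per-stage bin count still yields a fine-grained final prediction.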
Strengths: * State-of-the-art results. The work achieves SOTA in a highly competitive space of monocular depth estimation with over 7% improvement in RMSE over the prior SOTA. IEBins idea is also validated and shows about 2% improvement to adaptive bins baseline in fair settings.
* Interesting idea. The idea of recursive division of bins is obvious in retrospect yet creative. Elastic nature of bins based on uncertainty also makes sense and subtly introduces the ‘adaptive’ nature of bins.
* Well exploited ideas and good architecture design. The authors have exploited ideas from other works well to achieve their goal, e.g. iterative refinement using a GRU, developed in the separate field of optical flow estimation, has been introduced here in a well-designed architecture.
* Potential to be foundational. Recently, depth estimation has seen foundational ideas like adaptive binning that serve as a starting point for a series of several follow up works. I believe the iterative refinement of bins can prove to be as influential.
* Good to see ablations on all bin types in fair settings.
Weaknesses: * W1 **Incomplete literature review**. Authors have missed some important published works that are highly relevant to this work:
* [a] - LocalBins (ECCV’22) also introduces the idea of “splitting” the bins and step-wise refinement of bins in a coarse-to-fine manner. IEBins and LocalBins are highly related and pursue the same goal but follow different approaches. Quantitative comparison with [a] and providing the insights to differences and similarities is highly suggested.
* [b] - PixelFormer (WACV’23) is also based on the adaptive binning framework and introduces layer-wise refinement of “pixel queries”.
* W2 **Absence of quantitative evidence towards the working of the fundamental idea**. According to the proposed idea of iteratively making the ‘target’ depth interval smaller, the bin-width (or its median across an image) should exponentially decrease stage-by-stage, with some flexibility introduced by the elastic nature. At the same time, if the bins are too elastic, i.e. final bin-widths are comparable to initial bin-widths (e.g. only 50% smaller than previous or so), then the iterative restrictive nature of depth search is invalidated. Although the authors visualise the refinement process in Fig. 3, these results can still be explained by ‘too elastic’ bins as the elastic-bin-adjustment varies spatially. Also, are the uncertainty 'heat maps' scaled individually or globally? If individually (which seems to be the case), then the uncertainty visualizations only show that uncertainty tends to take much higher values near edges, rather than an overall absolute decrease, and therefore tell nothing about the bin widths. The authors are suggested to provide evidence showing that the method actually works as described, to avoid misleading conclusions — for example, a simple line plot of the evolution of target bin-width through the course of refinement (multiple lines for pixels across a row, or a single line plot of the median elastic bin width across the image, etc.), or any other form the authors deem suitable that delivers the evidence clearly.
* W3 **Unsubstantiated claim at L44**. Authors claim that large number of bins are undesirable and use this as the main motivation for the work. Authors state at L43-44 that “Learning ambiguity grows rapidly as the number of bins increases” but fail to provide any explanation, reference or evidence. As this information can be potentially misleading, authors should either remove the claims or provide an explanation, reference or evidence towards the claim.
[a] Bhat, Shariq Farooq, Ibraheem Alhashim, and Peter Wonka. "Localbins: Improving depth estimation by learning local distributions." European Conference on Computer Vision. Cham: Springer Nature Switzerland, 2022.
[b] Agarwal, Ashutosh, and Chetan Arora. "Attention attention everywhere: Monocular depth prediction with skip attention." Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision. 2023.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: * How do the bin-widths actually evolve during the iterative refinement? What is the mean of elasticity factor (new adjusted width divided by the bin-width at that stage with no elasticity)?
* What is the maximum bin-width in meters reached at the final stage e.g. for the NYU test set? This can also answer the elasticity to some extent.
* Why is a large number of bins undesirable? What is the evidence?
* How does IEBins compare with LocalBins?
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 2 fair
Contribution: 4 excellent
Limitations: The authors do not explicitly reflect upon the limitations or scope of the work. Refer to "weaknesses" for major concerns.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ### __We thank our reviewer for the constructive feedback and comments.__
### _W1&Q4: Incomplete literature review. Comparison with LocalBins._
A1: We will add these two interesting and relevant works to our revised version.
Similarities with LocalBins: Both IEBins and LocalBins use a multi-stage fashion to refine the binning structure.
Differences between them: At each stage, LocalBins divides all bins from the previous stage, so the number of bins grows as the stage increases, while IEBins locates and divides the target bin only, keeping the number of bins fixed across stages. LocalBins refines the binning structure at multiple resolutions, while IEBins refines it at a single resolution.
Quantitative comparison with LocalBins: We follow the experimental settings in Table 5 and the results of Baseline + LocalBins on Abs Rel, RMSE, log10, $\delta < 1.25$, $\delta < {1.25^2}$, $\delta < {1.25^3}$ are 0.090, 0.319, 0.038, 0.932, 0.992, 0.998 on the NYU-Depth-v2 dataset, which are worse than those of Baseline + IEBins (0.087, 0.314, 0.038, 0.936, 0.992, and 0.998).
### _W2&Q1&Q2: Quantitative evidence towards the working of the fundamental idea, elasticity factor and maximum bin-width in meters reached at the final stage._
A2: Nice comments. We randomly choose a sample from the NYU-Depth-v2 test set and show, for each stage, the median elastic target bin width across the image, together with the corresponding uncertainty values and elasticity factors. Note that 9.9 is the original range size ($d_{max} = 10$ minus $d_{min} = 0.1$).
| | Stage 1 | Stage 2 | Stage 3 | Stage 4 | Stage 5 | Stage 6 |
|:---|:---|:---|:---|:---|:---|:---|
| Width (median) | 9.9 | 2.561 | 0.756 | 0.228 | 0.073 | 0.024 |
| Uncertainty (std) | - | 0.971 | 0.281 | 0.087 | 0.029 | 0.010 |
| Elasticity factor | - | 4.139 | 3.917 | 4.222 | 4.562 | 4.800 |
It can be seen that as the stage increases, the elastic target bin widths and uncertainty values continue to decrease. The elasticity factors are between 3.9 and 4.8.
The maximum widths of the elastic target bin and of the newly divided bins at the final stage are 0.046 m and 0.0029 m (0.046/16), respectively.
### _W3&Q3: Unsubstantiated claim at L44. Why is a large number of bins undesirable? What is the evidence?_
A3: As presented in [1], the depth candidate corresponding to the peak point of the probabilistic distribution (or complementary cost distribution) may not be the true optimal depth candidate. However, the desired depth prediction can still be obtained after a linear combination of the probabilistic distribution and the depth candidates. In other words, there are many linear combinations for a set of depth candidates that can yield the desired depth prediction. As the number of bins increases, the combination between the probabilistic distribution and the depth candidates becomes more and more complex. Hence, we point out that ``this learning ambiguity grows rapidly as the number of bins increases''. Intuitively, it is much easier to classify the optimal candidate from a small set of depth candidates than from a large one. Hence, we choose a small number of bins. To avoid potential misunderstanding, we will rephrase or remove this claim in the revision.
[1]. Adaptive Unimodal Cost Volume Filtering for Deep Stereo Matching, AAAI2020.
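The ambiguity argument can be made concrete with a toy example (our construction, not from the paper): two very different probability vectors over the same depth candidates produce the identical depth after the linear combination, yet their peaks sit on entirely different candidates.

```python
import numpy as np

candidates = np.linspace(0.1, 10.0, 64)   # a "large" set of depth candidates
target = 3.7                              # desired depth prediction

def bracketing_mix(lo_idx, hi_idx):
    """Probability on two candidates whose weighted mean equals `target`."""
    c_lo, c_hi = candidates[lo_idx], candidates[hi_idx]
    w = (c_hi - target) / (c_hi - c_lo)
    p = np.zeros_like(candidates)
    p[lo_idx], p[hi_idx] = w, 1.0 - w
    return p

i = int(np.searchsorted(candidates, target))        # first candidate above target
p_sharp = bracketing_mix(i - 1, i)                  # mass on the two nearest bins
p_spread = bracketing_mix(0, len(candidates) - 1)   # mass on the two endpoints

pred_sharp = float(p_sharp @ candidates)
pred_spread = float(p_spread @ candidates)
```

Both distributions regress to the same depth, so the training signal alone cannot pin down which distribution the model should produce; with more candidates, even more such combinations exist.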
---
Rebuttal Comment 1.1:
Comment: Thanks for your reply! Your response answered my queries.
From the table you shared, it is evident that the idea works. Although the response to W3 makes sense, it is highly contextual and not an established fact e.g. AdaBins performs best at 256 bins (>> nbins in IEBins). So it is still suggested to rephrase or remove.
LocalBins comes very close (equal in 3 out of 6 metrics) and makes the improvements weaker.
The visualization can be better in all figures. The results in all cases should be normalized uniformly (e.g. use a dataset global min (e.g. 0.1 meters) and max depth (e.g. 10 meters) for depth maps, and not individual basis). This includes the uncertainty visualization as commented above.
In summary, I believe the overall design of the proposed framework is quite nice and extendable. There are missing parts and weaknesses but the design and performance outweigh them. The paper is worth the acceptance.
---
Reply to Comment 1.1.1:
Comment: ### __Thank you very much for your feedback and encouraging words.__
We will revise our paper accordingly.
Strengths: - The depth bin is adjusted elastically based on the depth prediction uncertainty.
Weaknesses: - The contributions are not enough to my knowledge. I think only the elastic bins are new. The iterative manner for depth estimation by GRUs has been proposed in RAFT[28] and adopted in many follow-up works, e.g., RAFT-Stereo[29] and Itermvs[30]. The depth bins as shown in Eq 1,2,3&4 have been investigated in Adabins [3]. As for the framework of the feature extractor and GRU-based layers, standard implementations are involved. The multi-stage depth estimation pattern was also seen in previous works, e.g., CasMVS [Gu et al, cvpr2020] and IterMVS[30].
- Some minor ones:
- line 141: a RGB --> an RGB
- line 151: during iteration --> during iterations
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: - Have other depth uncertainty presentations been investigated or compared with the variance of the probabilistic distribution used in Eq 6?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 2 fair
Limitations: - N/A for the limitations.
- I suggest the authors provide the failure cases (and the corresponding explanations) of the proposed IEBins, w.r.t. for example, the number of bins and the elastic width of bins.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ### __We thank our reviewer for the constructive feedback and comments.__
### _W1: The contributions are not enough to my knowledge._
A1: Apologies for not stating the contributions clearly. In this work, we introduce a novel iterative elastic bins (IEBins) strategy for monocular depth estimation. Instead of using only one set of bins like AdaBins [3], the IEBins refines the binning structure in an iterative manner. The initialization stage makes a coarse uniform discretization of the full depth range, and each subsequent stage iteratively locates and uniformly discretizes the target bin by using the target bin as the new depth range. While previous works CasMVS [Gu et al, cvpr2020] and IterMVS [30] also adopt a multi-stage fashion to estimate depth, they acquire a new depth range by centering on the current depth prediction and empirically setting the range size. In addition, the depth interval size of different stages at the same level in IterMVS [30] remains the same, while the bin width of our target bin is gradually reduced for a finer-grained depth search. To cope with possible error accumulation during iterations, we further make the target bin elastic based on the depth uncertainty. To instantiate the IEBins, we design the network architecture by borrowing ideas from other fields such as optical flow estimation (RAFT [28]). Last but not least, the IEBins ranks 2nd and outperforms all previously published methods on the KITTI benchmark leaderboard at submission time.
### _W2: Some minor ones: a RGB --> an RGB, during iteration --> during iterations._
A2: We will fix these grammatical errors in the revision.
### _Q1: Have other depth uncertainty presentations been investigated or compared with the variance of the probabilistic distribution used in Eq 6?_
A3: We have experimented with the predictive uncertainty [1] as well as using the variance directly in our framework, and find that using the standard deviation of the probabilistic distribution works better.
[1] . Depth Completion from Sparse LiDAR Data with Depth-Normal Constraints, CVPR 2019.
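A minimal sketch of the uncertainty signal discussed here — the standard deviation of the per-pixel probability over depth candidates. This paraphrases the idea of Eq. 6; the paper's exact form may differ:

```python
import numpy as np

def distribution_std(probs, centers):
    """Standard deviation of a per-pixel probability over depth candidates.

    A peaked distribution (confident prediction) gives a small value;
    a spread-out distribution gives a large one.
    """
    mean = float(probs @ centers)
    return float(np.sqrt(probs @ (centers - mean) ** 2))

centers = np.linspace(0.1, 10.0, 16)
peaked = np.eye(16)[7]            # all mass on one candidate -> zero uncertainty
uniform = np.full(16, 1.0 / 16)   # maximally spread -> large uncertainty
```

In the elastic-bin view, a larger value of this statistic would widen the target bin, guarding against committing to a wrong bin early.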
### _L1: I suggest the authors provide failure cases (and the corresponding explanations) of IEBins._
A4: We will revise our paper according to this nice advice. | Summary: The paper proposes an iterative elastic bins (IEBins) strategy for monocular depth estimation. The IEBins use a small number of bins adaptively at each iteration. It use a GRU to predict the depth distribution at each stage. The authors conduct experiments on 3 commonly used datasets and the proposed method shows better results than previous methods.
Strengths: The paper is well written and easy to follow.
The idea of using iterative update is interesting.
The experiments and ablation study are thorough and validate the proposed method's effectiveness.
Weaknesses: 1. The motivation of iterative updates is not very clear. RAFT used iterative updates because at each iteration, a new cost volume can be constructed based on current prediction. But I don't see where the new information comes from in MDE.
2. Following 1: maybe the improved performance comes from the depth candidates, since the model knows what the depth is. It would be interesting to see how the model performs when a positional encoding of depth candidates is added to other non-iterative methods, such as AdaBins.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: L44. "This learning ambiguity grows rapidly as the number of bins increases, exacerbating the difficulty of model convergence and prone to overfitting." Why the ambiguity grows in this case? And why the proposed iterative method doesn't lead to increased ambiguity?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: 1. Currently only the depth maps are shown. It would be great to compare the point cloud with other methods since it's much easier to see the 3D structure in point cloud.
2. How does the improved depth estimation benefit downstream tasks (compared to other MDE methods)? For example, could it be used for SLAM or 3D reconstruction from multi-view images?
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ### __We thank our reviewer for the constructive feedback and comments.__
### _W1: The motivation of iterative updates is not very clear. I don't see where the new information comes from in MDE._
A1: The IEBins embodies the idea of iterative division of bins. Each stage first divides the elastic target bin (the bin in which the current depth prediction is located) and then feeds the updated depth candidates into the iterative optimizer as new information for finer-grained depth search.
### _W2: Would be interesting to see how the model performance by using a positional encoding of depth candidates to other non-iterative methods, such as adabins._
A2: Nice comment. We verified the model performance by adding a positional encoding of depth candidates to AdaBins and observed little change in performance.
### _Q1: Why the ambiguity grows in this case? And why the proposed method doesn't lead to increased ambiguity?_
A3: As presented in [1], the depth candidate corresponding to the peak point of the probabilistic distribution (or complementary cost distribution) may not be the true optimal depth candidate. However, the desired depth prediction can still be obtained after a linear combination of the probabilistic distribution and the depth candidates. In other words, there are many linear combinations for a set of depth candidates that can yield the desired depth prediction. As the number of bins increases, the combination between the probabilistic distribution and the depth candidates becomes more and more complex. Hence, we point out that ``this learning ambiguity grows rapidly as the number of bins increases''. The proposed IEBins uses multiple small sets of bins instead of one large set of bins. To avoid potential misunderstanding, we will rephrase or remove this claim in the revision.
[1]. Adaptive Unimodal Cost Volume Filtering for Deep Stereo Matching, AAAI 2020.
### _L1: It would be great to compare the point cloud with other methods._
A4: We have shown point cloud comparison in Figs. 3 and 4 of supplementary material.
### _L2: How does the improved depth estimation benefit downstream tasks (compared to other MDE methods)?_
A5: We will add the necessary discussion of future work to the revised version, following this nice advice.
---
Rebuttal Comment 1.1:
Comment: Thanks for your reply. I think the MDE metrics are somewhat saturated, and it would be great to show the benefits of the improvements in downstream tasks such as SLAM or 3D reconstruction.
---
Reply to Comment 1.1.1:
Comment: ### __Thank you very much for your feedback.__
### _L2: I think the MDE metrics are somewhat saturated, and it would be great to show the benefits of the improvements in downstream tasks such as SLAM or 3D reconstruction._
A1: As suggested, we integrate IEBins and NeWCRFs [6] into ORB-SLAM2 [1] in the RGB-D setting and evaluate the visual odometry performance on the KITTI odometry dataset. We report results on keyframes (selected by ORB-SLAM2) and on all frames of sequences 01-10. The ATE (m) metric is used; "key" and "all" stand for keyframes and all frames, respectively.
| Seq | IEBins (key) | NeWCRFs (key) | IEBins (all) | NeWCRFs (all) |
|:---|:---|:---|:---|:---|
| 01 | 117.06 | 536.53 | 125.09 | 583.20 |
| 02 | 12.22 | 13.32 | 13.59 | 13.97 |
| 03 | 6.72 | 8.31 | 7.15 | 9.04 |
| 04 | 16.70 | 31.56 | 16.61 | 30.59 |
| 05 | 8.10 | 8.05 | 7.56 | 7.86 |
| 06 | 1.32 | 0.96 | 1.35 | 0.95 |
| 07 | 2.48 | 3.09 | 2.55 | 3.24 |
| 08 | 10.89 | 9.82 | 11.06 | 9.90 |
| 09 | 5.44 | 7.61 | 5.68 | 7.67 |
| 10 | 7.21 | 11.73 | 8.24 | 12.66 |
As we can see, our IEBins either significantly exceeds NeWCRFs or achieves on-par performance.
[1] ORB-SLAM2: an Open-Source SLAM System for Monocular, Stereo and RGB-D Cameras, IEEE Transactions on Robotics, 2017
We further evaluate IEBins and NeWCRFs [6] on the single-view reconstruction task. The RMSE (m) metric is used.
| Method | RMSE (NYU) | RMSE (KITTI) |
|:---|:---|:---|
| IEBins | 0.195 | 1.481 |
| NeWCRFs | 0.205 | 1.526 |
As we can see, our IEBins exceeds NeWCRFs by 4.9% and 2.9% on NYU and KITTI datasets, respectively. | null | null | null | null |
Pessimistic Nonlinear Least-Squares Value Iteration for Offline Reinforcement Learning | Reject | Summary: The paper studies offline RL with non-linear function approximation. The paper is mainly motivated as existing sample complexity guarantees on offline RL algorithms with general function approximation yield suboptimal dependency on the function class complexity, e.g. when the bounds are translated to the linear case. The paper proposes an oracle-efficient algorithm that achieves minimax optimal problem-dependent regret when the bounds are specialized to the linear case. The paper also introduces a new coverage definition.
Strengths: - The paper appears to be technically sound with some new ideas in the algorithm design and formulation of dataset coverage.
- The approach achieve minimax optimal rate in non-linear function approximation, when bounds are converted to linear.
Weaknesses: - The main weakness is that the proposed approach either requires uniform coverage or non-linear bonus oracle. The non-linear bonus oracle is a strong requirement and in effect, simply removes the difficulties related to pessimism in offline RL. On the other hand, the uniform coverage assumption is too strong and thus, it is unfair to compare its efficiency to pessimistic offline RL algorithms.
- A clear comparison to prior work is not presented. In particular, there are multiple axes of comparison, such as dependency on $\epsilon$, dependency on function classes, data coverage requirement, type of oracle, computational efficiency/tractability, realizability assumptions, etc. It is difficult to clearly evaluate the results in this paper without such comparisons. For instance, it will be helpful to have a table as well as translating the bounds of the other algorithms into linear case to see in detail. Additionally, there are several pessimistic offline RL algorithms with general function approximation that only require optimization oracles instead of the more difficult bonus oracle, and no comparison with those papers are presented:
Cheng et al. Adversarially trained actor critic for offline reinforcement learning. In International Conference on Machine Learning (pp. 3852-3878). PMLR
Rashidinejad et al. "Optimal conservative offline rl with general function approximation via augmented lagrangian." arXiv preprint arXiv:2211.00716 (2022).
Ozdaglar et al. Revisiting the Linear-Programming Framework for Offline RL with General Function Approximation. arXiv preprint arXiv:2212.13861
Zhu et al. Importance Weighted Actor-Critic for Optimal Conservative Offline Reinforcement Learning. arXiv preprint arXiv:2301.12714.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: - Is the suboptimality of dependency on the function-class complexity in the bounds of prior algorithms with general function approximation inherent to the algorithm design or a byproduct of analysis?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your valuable feedback. We address your concerns and questions point-by-point.
**Q1**: Non-linear bonus oracle is a strong requirement. It removes the difficulties related to pessimism in offline RL
**A1**: We would like to clarify that our method does not transfer the primary computational burden to the oracle for identifying the appropriate bonus function. Indeed, Appendix C of Agarwal et al. (2023) and Algorithm 2 of Li et al. (2023) present an efficient approach to implementing this bonus oracle. Therefore, we can provide an efficient implementation of our bonus oracle.
----
**Q2**: Unfair to compare the efficiency because the assumption is strong and comparison to more prior works.
**A2**: Thank you for raising the issue of fairness in our comparison due to the strength of our assumptions and the need for a broader review of prior works. We will enhance our discussion by making a more extensive comparison. This will include aspects such as the type of algorithm used, the data coverage assumptions made, and the type of oracle utilized. In particular, we have made the following table to facilitate the comparison.
|Algorithm|Algorithm Type |Function Classes|Data Coverage| Types of Oracle |Regret Type |
|--------------|-----------------------|--------------------|-------------------|--------|-----|
| Xie et al. (2021)|Bellman-consistent Pessimism| General|Partial|Optimization on Policy and Function Class| Worst-case
|CPPO-TV Uehara and Sun (2021)|MLE| General| Partial| Optimization on Policy and Hypothesis Class|Worst-case
|CORAL Rashidinejad et al. (2022)| MLE|General|Partial|Optimization on Policy and Function Class|Worst-case
|Reformulated LP Ozdaglar et al. (2023)| Linear Program|General|Partial| Linear Programming|Worst-case|
| ATAC Cheng et al. (2022) | Actor Critic | General | Partial |No-regret Policy Optimization & Optimization on the Function class| Worst-case
|A-Crab Zhu et al. (2023)| Actor Critic | General | Partial |No-regret Policy Optimization & Optimization on the Function class|Worst-case
| LinPEVI-ADV+ Xiong et al, (2022) | LSVI-type | Linear | Uniform | \ | Instance-dependent |
|PFQL Yin et al, (2022)|LSVI-type|Differentible|Uniform| Gradient Oracle|Instance-dependent|
|PNLSVI (Our work)|LSVI-type| General| Uniform| Bonus Oracle & Optimization on the Function Class|Instance-dependent
----
**Q3**: Is the suboptimality of dependency on the function-class complexity in the bounds of prior algorithms with general function approximation inherent to the algorithm design or a byproduct of analysis?
**A3**: We believe that it is inherent to the algorithm design. Without using the variance information of the value function and a Bernstein-type concentration inequality to construct the bonus/confidence set, we do not think they can achieve optimal dependency on the function-class complexity.
----
[1] Agarwal et al. (2023). Vo q l: Towards optimal regret in model-free rl with nonlinear function approximation. COLT
[2] Li, et al. (2023) Low-switching policy gradient with exploration via online sensitivity sampling. arXiv preprint
[3] Xie et al. (2021). Bellman-consistent pessimism for offline reinforcement learning. In NeurIPS
[4] Uehara and Sun (2021). Pessimistic model-based offline reinforcement learning under partial coverage. In ICLR.
[5] Rashidinejad et al. (2022). Optimal conservative offline rl with general function approximation via augmented lagrangian. In ICLR.
[6] Ozdaglar et al. (2023). Revisiting the linear-programming framework for offline rl with general function approximation. In ICML.
[7] Cheng et al.(2022). Adversarially trained actor critic for offline reinforcement learning. In ICML
[8] Zhu et al.(2023). Importance weighted actor-critic for optimal conservative offline reinforcement learning. arXiv preprint
[9] Xiong et al. (2023). Nearly minimax optimal offline reinforcement learning with linear function approximation: Single-agent mdp and markov game. In ICLR
[10] Yin et al.(2022). Offline reinforcement learning with differentiable function approximation is provably efficient. In ICLR | Summary: This paper considers variance-weighted least-squared regression for offline RL with general function approximation. Under a uniform data coverage assumption, they show that the proposed algorithm obtains a sub-optimality bound that scales with the $D^2$-divergence of the offline data set, the positive lower-bounded constant of the uniform data coverage, and the complexity of the function class. Their bound obtains the right order when realized in the linear case.
Strengths: - clear presentation (though some parts can be improved further -- see Weaknesses)
- the obtained result is new and relevant to the offline RL community
Weaknesses: - The main weakness is that the uniform data coverage assumption is very strong. In the linear case, this assumption is equivalent to that the behavior policy is exploratory overall dimensions of the linear feature. A question for the authors is that in such a case, why would we even need pessimism? Pessimism is used when the data coverage is partial thus we become pessimistic about uncertain actions. But when the coverage is uniform, it can eliminate the need for pessimism and we can simply use greedy algorithms. I understand that without such a uniform data coverage assumption, it seems difficult to get a reliable estimation of the variance of the transition kernel and it would be interesting to get rid of this assumption. But if we could not get rid of it yet, the very least expectation is that we need to explain this assumption further, especially regarding where pessimism is really needed with this assumption.
- Writing can be improved further. For example, the $D^2$-divergence and the definition of the bonus function (Def 4) can be explained and motivated further. The current presentation of these concepts are not very helpful
- Some claims might be potentially misleading. It's not comfortable to view the proposed algorithm as computationally efficient even in the oracle sense. Specifically, the construction of the bonus function in Definition 4.1 is far from being computationally efficient since it is essentially a constrained optimization over the version space. That said, it is nowhere more computationally efficient than version-space-based algorithms such as the "Bellman-consistent pessimism" of Xie et al.
- Though the main result is new, it appears expected given the already-developed machinery in Agarwal et al. 2022 and Xiong et al. 2022. What are the technical challenges in the current problem that the existing techniques cannot resolve?
- A minor point: PNLSVI is never introduced before being used
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Please see the Weaknesses section
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your valuable feedback. We address your concerns and questions point-by-point.
**Q1**: Uniform data coverage assumption is very strong, in such a case, why would we even need pessimism?
**A1**: We agree that pessimism does not require the uniform data coverage assumption. However, since we want to leverage the variance information of the value function, we made this assumption. Note that our assumption reduces to those in Xiong et al. (2023) and Yin et al. (2023) in the linear function approximation setting, because they also need to leverage the variance information. How to relax this assumption to partial coverage or a single concentrability assumption is an interesting direction, and we will study it in future work.
----
**Q2**: The $D^2$-divergence and the definition of the bonus function (Def 4) can be explained and motivated further.
**A2**: Thank you for your suggestion. In our revision, we will provide a detailed explanation as follows.
The $D^2$ divergence, defined as $D_{\mathcal F_h}^2(z;\mathcal D_h; \sigma_h) = \sup_{f_1,f_2 \in \mathcal F_h}\frac{(f_1(z)-f_2(z))^2}{\sum_{k \in [K]}\frac{1}{(\sigma_h(z_h^k))^2}(f_1(z_h^k)-f_2(z_h^k))^2 + \lambda}$, is a measure introduced to quantify the disparity of a given point $z=(s,a)$ from the historical dataset $\mathcal D_h$. It signifies the extent to which the behavior of functions within the function class can deviate at the point $z=(s,a)$, based on their difference in the historical dataset. It can be viewed as the generalization of elliptical norm $\|\phi(s,a)\|_ {\Sigma_h^{-1}}$ in linear case, where $\Sigma_h$ is defined as $\sum_{k \in [K]}\phi(s_h^k,a_h^k)\phi(s_h^k,a_h^k)^\top + \lambda \mathbf{I}$.
By employing the pessimism principle, we hope to design our policy based on a worst-case guarantee on its expected return. One significant challenge arises when trying to prevent overestimation: we must quantify the functional uncertainty, denoted by $\max_{f_1,f_2 \in \mathcal{F}}|f_1(s_h,a_h)-f_2(s_h,a_h)|$. But the inherent complexity of this uncertainty bonus is huge, as it involves an optimization over functions. Therefore, we assume the existence of a bonus oracle with reduced complexity (Def 4.1). This bonus oracle can indeed be efficiently implemented using the method proposed in Appendix C of Agarwal et al. (2023) and Algorithm 2 of Li et al. (2023).
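As a purely illustrative sketch of the linear special case mentioned above (our own toy example, not from the paper), the following checks numerically that the $D^2$ ratio, restricted to unit-norm parameter differences and with $\sigma_h \equiv 1$, never exceeds the elliptical bonus $\phi(z)^\top \Sigma^{-1} \phi(z)$, which follows from Cauchy-Schwarz in the $\Sigma$ inner product:

```python
import numpy as np

rng = np.random.default_rng(0)
d, K, lam = 4, 50, 1.0

# Historical features phi(z_h^k) and a query point phi(z); sigma_h is set to 1.
Phi = rng.normal(size=(K, d))
phi_z = rng.normal(size=d)

# Elliptical bonus: phi(z)^T Sigma^{-1} phi(z) with Sigma = sum_k phi_k phi_k^T + lam I.
Sigma = Phi.T @ Phi + lam * np.eye(d)
elliptical = phi_z @ np.linalg.solve(Sigma, phi_z)

# D^2 ratio for linear f_1 - f_2 = dtheta^T phi, searched over random unit dtheta.
best = 0.0
for _ in range(20000):
    dtheta = rng.normal(size=d)
    dtheta /= np.linalg.norm(dtheta)          # restrict to ||dtheta|| <= 1
    num = (dtheta @ phi_z) ** 2
    den = np.sum((Phi @ dtheta) ** 2) + lam   # sum_k (f_1 - f_2)(z_h^k)^2 + lam
    best = max(best, num / den)

# The random search never exceeds the closed-form elliptical bonus.
assert best <= elliptical + 1e-9
```

The random search only lower-bounds the supremum, but it illustrates why the $D^2$ divergence generalizes the elliptical norm used in the linear setting.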
----
**Q3**: Computationally efficiency: bonus function is not efficient.
**A3**: In line 55, we said "Our algorithm is oracle-efficient, i.e., it is computationally efficient when there exists an efficient regression oracle and bonus oracle for the function class".
We would like to clarify that the bonus oracle can indeed be efficiently implemented using the method proposed in Appendix C of Agarwal et al. (2023) and Algorithm 2 of Li et al. (2023). They offer an effective and efficient algorithm for implementing the bonus oracle.
Nevertheless, we acknowledge your concerns and are open to modifying our statement for the sake of clarity. We propose revising it to state, "Our algorithm is computationally tractable given the bonus oracle".
----
**Q4**: PNLSVI is never introduced before used
**A4**: Its full name is Pessimistic Nonlinear Least-Square Value Iteration (PNLSVI). We will provide its full name in the abstract and the introduction.
----
[1] Agarwal et al. (2023). VOQL: Towards optimal regret in model-free RL with nonlinear function approximation. In COLT
[2] Li et al. (2023). Low-switching policy gradient with exploration via online sensitivity sampling. arXiv preprint
[3] Xiong et al. (2023). Nearly minimax optimal offline reinforcement learning with linear function approximation: Single-agent mdp and markov game. In ICLR
[4] Yin et al. (2022). Offline reinforcement learning with differentiable function approximation is provably efficient. In ICLR
---
Rebuttal Comment 1.1:
Comment: I thank the authors for the response. I would like to keep my initial evaluation.
---
Reply to Comment 1.1.1:
Comment: Thank you for your positive feedback. | Summary: This paper proposes a pessimistic nonlinear least-squares value iteration algorithm to tackle the offline reinforcement learning problem. The main motivation of the paper is to propose an algorithm that is both computationally efficient and minimax optimal w.r.t. the complexity of the nonlinear function class. The proposed pessimism-based algorithm strictly generalizes the existing pessimism-based algorithms for both linear and differentiable function approximation and is oracle-efficient. Also, the proposed algorithm is proven to be optimal w.r.t. the function class complexity, closing the gap originating from the previous work on differentiable function approximation.
Strengths: 1) The proposed algorithm is proven to be optimal w.r.t. the complexity of nonlinear function class, closing the gap from the previous work on the differentiable function class and generalizes it to the wider nonlinear function class.
2) The proposed algorithm is computationally efficient if there exist the efficient oracles for both regression minimization and bonus function optimization/searching.
Weaknesses: 1) The paper's presentation needs some work. For example, the terminology is not consistent: the D^2 divergence definition in Definition 3.2 is not consistent with the later notation D_F in line 239. The language itself needs some work too; for example, there are many places where 'an' is needed but 'a' is used, and vice versa. Please define RL before using it in the abstract. There are also some ambiguities in the definitions that need clarification in the Questions section.
2) The paper's claimed contribution is a bit exaggerated. Although the proposed algorithm does not need the computationally heavy optimization of previous works in the planning phase, it transfers the main computational burden to the oracle that finds a satisfying bonus function, which seems to be a very time-consuming task. The same applies to the claim of being the first statistically optimal algorithm for nonlinear offline RL: being able to obtain an optimal result in the reduced linear function class does not necessarily mean the algorithm is optimal in the broader nonlinear class.
3) Although the considered class is nonlinear and more general than the previously considered linear or differentiable classes, the techniques used in the analysis are nothing new in my opinion, beyond re-defining the metrics for the nonlinear function class and connecting the results together along with additional assumptions.
Overall, I think the paper is well motivated, but given the presentation and the insignificant contribution, it's not ready to be published.
Technical Quality: 1 poor
Clarity: 2 fair
Questions for Authors: 1) In line 137, the paper defines the Bellman operator for the function f: S->R. Then why in the operator, it takes both state and action as input?
2) In line 141, should we replace the V with Q?
3) In line 172, the generalized definition to offline setting, If D_h corresponds to the observations collected up to stage h in the MDP, what is z_h^k? Is it only one (state,action) pair or the observations collected up to stage h?
4) Line 214, should the function be all \hat{f^{\prime}}?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 1 poor
Presentation: 2 fair
Contribution: 1 poor
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your valuable feedback. We address your concerns and questions point-by-point.
**Q1**: The presentation needs some work.
**A1**: Thanks for your feedback. We will address these problems one by one.
- The consistency of the $D^2$ definition:
Our current definition in Section 3.2 pertains to the online setting, whereas in line 172, we present the $D^2$ definition suited to the offline setting. This definition is intended to be consistent with the notation we adopt later in the paper. To emphasize our new definition, we will clarify it in the revision, and we will also include a remark comparing it with the online setting to ensure a comprehensive understanding. We apologize for any confusion our initial presentation may have caused and thank you for pointing this out.
- Writing errors and notation issues:
We will thoroughly proofread the entire document to rectify any grammatical mistakes and improve the clarity of the text.
Regarding your comment on line 172, we introduced the shorthand notation where $z = (s, a)$ and $z_h^k = (s_h^k,a_h^k)$. We will define this notation in the revision.
Further, we are grateful for your observation on line 214. It appears to be a typo and we will promptly correct it.
- Potential ambiguity of the use of the Bellman operator:
We define the Bellman operator on line 137, where it is intended to accept both state and action as inputs according to our definition. To mitigate any possible ambiguity, we will incorporate brackets [] to explicitly denote that the operator is applied to a function $f$ that takes only the state as input, yielding a function that takes both state and action as inputs, i.e., $[\mathcal T_h f](s_h,a_h) = \mathbb E_{s_{h+1} \sim \mathbb P_h(\cdot|s_h,a_h)}[r_h(s_h,a_h) + f(s_{h+1})]$.
In relation to line 141, the function $V$ is used in place of $f$ from the definition. Therefore, upon application of the Bellman operator, it is intended to take both state and action as inputs. We will also use $[\mathcal{T}_h V_{h+1}^{\pi}](s_h,a_h)$ to avoid confusion.
----
**Q2**: contribution is exaggerated
**A2**: We will revise this part to present a more accurate representation without making excessive claims.
We appreciate your keen observation that the optimal result in the reduced linear function class does not necessarily imply optimality in the broader nonlinear class. Yet this can still indicate that our data-dependent regret bound is reasonably tight, especially the dependency on the function-class complexity.
We would also like to clarify that our algorithm does not transfer the primary computational burden to the oracle for identifying the appropriate bonus function. Indeed, Appendix C of Agarwal et al. (2023) and Algorithm 2 of Li et al. (2023) present efficient approaches to executing this bonus oracle. Therefore, we can provide an efficient implementation of our bonus oracle, and our algorithm is indeed computationally efficient.
----
[1] Agarwal et al. (2023). Vo q l: Towards optimal regret in model-free rl with nonlinear function approximation. In COLT
[2] Li et al. (2023) Low-switching policy gradient with exploration via online sensitivity sampling. arXiv preprint
---
Rebuttal Comment 1.1:
Comment: Thank you for taking the time to reply to my review. I did check Algorithm 2 of Li et al. (2023) mentioned in the rebuttal, but the computational concern remains. As a result, I will keep my score unchanged.
---
Reply to Comment 1.1.1:
Comment: Dear reviewer gXdR,
Thank you for your reply. In order to fully understand and address your concern and improve our current work, we'd like to kindly request further clarification. Could you please explain what the computational burden is in this algorithm? In our opinion, this is no less efficient than the regression oracle used in other papers (i.e., the computational overhead for both the regression oracle and the bonus oracle are similar). Thanks.
Best,
Authors | null | null | Rebuttal 1:
Rebuttal: Dear reviewers,
Based on the questions of reviewer gXdR and reviewer 17B8 on technical challenges, we would like to emphasize the difficulty of the problem being studied and our novel techniques for tackling it. Firstly, the variance information in the context of Least-Squares Value Iteration (LSVI) type algorithms is crucial for obtaining better statistical efficiency. Under linear function approximation, the value function is always a linear function of the feature mapping $\phi$, and it is straightforward to approximate the value function by directly estimating the underlying linear parameter $\theta$. However, the situation becomes more complex with nonlinear function approximation, since the value function is no longer linear. For nonlinear function approximation in offline RL, we propose novel constructions and concentration inequalities. In detail, we approximate the value function and the square of the value function by the nonlinear least-squares regressions
$\tilde f_h' = \underset{f_h \in \mathcal F_h}{\operatorname{argmin}} \sum_{k \in [K]} (f_h(\bar s_h^k,\bar a_h^k)-\bar r_h^k - \hat f_{h+1}'(\bar s_{h+1}^{k}))^2$ and
$\tilde g_h' = \underset{g_h \in \mathcal F_h}{\operatorname{argmin}} \sum_{k \in [K]} (g_h(\bar s_h^k,\bar a_h^k)-(\bar r_h^{k} + \hat f_{h+1}'(\bar s_{h+1}^{k}))^2)^2$ (Algorithm 1, Lines 3-4).
We construct the following confidence interval for nonlinear function approximation
$\sum_{k \in [K]}(\bar f_h'(\bar z_h^k) - \tilde f_h' (\bar z_h^k))^2 \leq (\beta_{1,h}')^2$ and $\sum_{k \in [K]}\left(\bar g_h'(\bar z_h^k) - \tilde g_h' (\bar z_h^k)\right)^2 \leq (\beta_{2,h}')^2$ (Lemmas 6.1 and 6.2). With the help of the concentration properties, we could estimate the variance information and further provide a variance-dependent estimation for the value function $\tilde f_h = \underset{f_h \in \mathcal F_h}{\operatorname{argmin}} \sum_{k \in [K]} \frac{1}{\hat \sigma_h^2(s_h^k,a_h^k)}(f_h(s_h^k,a_h^k)-r_h^k - \hat f_{h+1}(s_{h+1}^k))^2$ (Algorithm 1, Line 10).
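For intuition only, here is a small sketch of variance-weighted least squares in the linear special case (the data and names are our own stand-ins, not the paper's); the reweighting by $1/\hat\sigma_h^2$ is the mechanism used in Algorithm 1, Line 10, over a general function class:

```python
import numpy as np

rng = np.random.default_rng(1)
K, d = 500, 3

# Linear class f(z) = theta^T phi(z) with heteroscedastic noise per sample.
Phi = rng.normal(size=(K, d))
theta_true = np.array([1.0, -2.0, 0.5])
sigma = rng.uniform(0.1, 2.0, size=K)   # per-sample noise scale (stand-in for hat sigma_h)
y = Phi @ theta_true + sigma * rng.normal(size=K)

# Variance-weighted least squares: argmin sum_k (1/sigma_k^2) (theta^T phi_k - y_k)^2.
W = 1.0 / sigma**2
A = Phi.T @ (W[:, None] * Phi)
b = Phi.T @ (W * y)
theta_wls = np.linalg.solve(A, b)

# Unweighted fit for comparison.
theta_ols, *_ = np.linalg.lstsq(Phi, y, rcond=None)

err_wls = np.linalg.norm(theta_wls - theta_true)
err_ols = np.linalg.norm(theta_ols - theta_true)
```

Down-weighting high-variance samples typically yields a tighter estimate, which is the statistical advantage the variance-dependent estimation targets.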
Additionally, we address the issue of reference-advantage decomposition. This method is an effective tool for overcoming the challenge posed by the additional error from uniform concentration over the entire function class $\mathcal F_h$. Xiong et al. (2023) employed an estimate of the Bellman operator for the decomposition, but in the nonlinear setting a direct counterpart does not exist. Prior works, such as Yin et al. (2022), were unsuccessful in adapting this technique to the nonlinear function class, resulting in a suboptimal dependency on the complexity of the function class. In comparison, we decompose the Bellman error into the reference uncertainty $r_h(s,a) + f_{h+1}^*(s,a) - [\mathcal T_h f_{h+1}^*](s,a)$ and the advantage uncertainty $\hat f_{h+1}(s,a) - f_{h+1}^*(s,a) - ([\mathbb P_h \hat f_{h+1}](s,a) - [\mathbb P_h f_{h+1}^*](s,a))$ (Line 313). For the reference uncertainty, the optimal value function $f^*_{h+1}$ is fixed and not related to the pre-collected dataset, which circumvents the additional uniform concentration over the whole function class and avoids the dependence on the function class size. For the advantage uncertainty, it is worth noticing that the distance between the estimated function $\hat{f}'_{h+1}$ and the optimal value function $f_{h+1}^*$ decreases as $O(1/\sqrt{K\kappa})$. Although we still need to maintain the uniform convergence guarantee, the advantage uncertainty is dominated by the reference uncertainty when the number of episodes $K$ is large enough (Lines 315-318).
To our knowledge, we are the first to utilize this method in a nonlinear function approximation and prove an optimal dependency on the complexity of the function class when it is specialized to the linear case.
----
Additionally, based on the feedback of reviewer vaFF, we have made a table to provide a comprehensive comparison of different algorithms. Please find the table in the uploaded pdf file.
----
[1] Xiong et al. (2023). Nearly minimax optimal offline reinforcement learning with linear function approximation: Single-agent mdp and markov game. In ICLR
[2] Yin et al. (2022). Offline reinforcement learning with differentiable function approximation is provably efficient. In ICLR
Pdf: /pdf/3c1b92764c9e0d7f22595a84634937f7fdac9113.pdf | NeurIPS_2023_submissions_huggingface | 2023 | null | null | null | null | null | null | null | null |
Fast Partitioned Learned Bloom Filter | Accept (poster) | Summary: The authors propose two methods to reduce the construction time of the partitioned learned Bloom filter (PLBF):
1. Fast PLBF, which can construct the same data structure as PLBF but with a smaller time complexity of O(N^2 k).
2. Fast PLBF++, which can construct the data structure with a time complexity of O(Nk log N + Nk^2) but may not necessarily construct the same data structure as PLBF. Fast PLBF++ is almost as memory-efficient as PLBF.
The authors prove that fast PLBF++ constructs the same data structure as PLBF when the distribution satisfies a certain constraint.
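For readers unfamiliar with the structure under discussion, here is a minimal toy sketch of the query path (score → region → backup Bloom filter). This is our own illustration: the hashing scheme, thresholds, and filter sizes are placeholders, whereas the real PLBF chooses region boundaries and sizes by solving a DP optimization.

```python
import hashlib
from bisect import bisect_left

class TinyBloom:
    """Minimal Bloom filter; not the paper's implementation."""
    def __init__(self, m: int, num_hashes: int):
        self.m, self.num_hashes = m, num_hashes
        self.bits = bytearray(m)

    def _positions(self, item: str):
        for i in range(self.num_hashes):
            h = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.m

    def add(self, item: str):
        for p in self._positions(item):
            self.bits[p] = 1

    def query(self, item: str) -> bool:
        return all(self.bits[p] for p in self._positions(item))

class ToyPLBF:
    """Score space [0,1] split into regions, each with its own backup filter."""
    def __init__(self, thresholds, sizes, num_hashes=3):
        self.thresholds = thresholds                       # ascending region boundaries
        self.filters = [TinyBloom(m, num_hashes) for m in sizes]

    def _region(self, score: float) -> int:
        return bisect_left(self.thresholds, score)

    def add(self, key: str, score: float):
        self.filters[self._region(score)].add(key)

    def query(self, key: str, score: float) -> bool:
        return self.filters[self._region(score)].query(key)

# Keys with high learned scores land in a region that PLBF would make small.
plbf = ToyPLBF(thresholds=[0.5, 0.9], sizes=[4096, 1024, 256])
keys = [(f"url{i}", i / 100) for i in range(100)]
for k, s in keys:
    plbf.add(k, s)
assert all(plbf.query(k, s) for k, s in keys)  # Bloom filters give no false negatives
```

The construction-time question the paper addresses is how to pick the thresholds and per-region filter sizes optimally, which is where the O(N^3 k) dynamic program arises.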
Strengths: 1. The algorithm and presentation are straightforward and strong theoretical results are provided.
2. The authors show empirically that their constructions keep an edge in practice and lead to faster PLBF construction.
Weaknesses: The authors may provide further ablation studies on the sensitivity of hyper-parameters in practice and discuss more about societal impact.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: My concerns are raised above.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The authors may include a discussion on the potential negative societal impact of their work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate the reviewer's positive assessment of our paper. We would like to address their concerns.
> The authors may provide further ablation studies on the sensitivity of hyper-parameters in practice
In response to this comment, we performed an ablation study to verify the sensitivity of the hyperparameters. The results showed that changing the hyperparameters from (N,k)=(1000, 100) to some extent does not dramatically change the false positive rate. In other words, the accuracy is robust to the hyperparameters, at least around (N,k)=(1000, 100). (The reason for considering around (N,k)=(1000, 100) is that previous ablation studies have shown that setting N to about 1000 and k to about 100 is appropriate to keep the false positive rate small and the build time short. For details, see the ablation studies mentioned in response to reviewer cwdf.)
Specifically, we performed a grid search for N=800, 850, 900, ..., 1200, and k=80, 85, 90, ..., 120. The memory usage of the backup Bloom filters was fixed at 500 Kb. The results are summarized as follows.
In the case of URL dataset,
- best case (minimal FPR case): when (N, k)=(950, 120), the FPR was 1.26%.
- worst case (maximal FPR case): when (N, k)=(850, 90), the FPR was 1.31%, i.e., 1.037 times larger than the minimum.
In the case of Ember dataset,
- best case (minimal FPR case): when (N, k)=(1100, 115), the FPR was 0.830%.
- worst case (maximal FPR case): when (N, k)=(1200, 80), the FPR was 0.870%, i.e., 1.047 times larger than the minimum.
Thus, at most, the false positive rate is only 1.037 or 1.047 times larger than the best case. Therefore, around (N, k)=(1000, 100), the accuracy is robust to hyperparameters.
This ablation study will be included in the final version.
> The authors may … discuss more about societal impact.
> The authors may include a discussion on the potential negative societal impact of their work.
The applicability of fast PLBF is broad, including databases and networks. For example, as our experiments suggest, it can be used to detect blacklisted malicious URLs or files. Furthermore, it may improve cache efficiency in databases or optimize routing tables in networks, as in the case of Bloom filters. The scope of fast PLBF fully encompasses and is broader than that of PLBF. In particular, our fast PLBF is expected to have a significant advantage over the original PLBF when applied to applications where the set to be retained changes frequently. This is because in such use cases, the PLBF must be rebuilt repeatedly to maintain accuracy, and fast PLBF has a much shorter construction time than PLBF.
In addition, we believe that fast PLBF is unlikely to have a negative societal impact because it speeds up the construction of PLBF while maintaining its full accuracy.
In the final manuscript, we will expand on the use of the proposed methods in the practical/social applications discussed above. | Summary: The authors propose faster dynamic programming variants of the learned partitioned Bloom filter data structure of Vaidya et al. that requires $O(N^3 k)$ time.
Two solutions are proposed: the first constructs the same data structure in $O(N^2 k)$ time, and the second constructs a potentially different structure in $O(Nk\log N+Nk^2)$ time.
Strengths: A significant speedup of a recent paper on a hot and interesting topic. Also, nice empirical gains in construction speed.
Weaknesses: In essence, the authors propose DP acceleration techniques without explaining the difference from off-the-shelf methods.
Many such solutions have been known for decades (e.g., see https://courses.engr.illinois.edu/cs473/sp2016/notes/06-sparsedynprog.pdf), and while it is possible that they don't cover the exact problem, they seem very similar.
While the authors propose to use the SMAWK algorithm, it seems that the wrapper needed to get from it to the actual dynamic program might also be known.
For example, (https://www.sciencedirect.com/science/article/pii/002001909090215J) solves the following problem:
$$ E[j] = \min_{0\le i<j} D[i] + w(i,j)\ .$$
This looks nearly identical to your Equation (2), and I believe that a small massaging of the notation might yield your $O(N^2 k)$ solution.
Without explaining the difference from known DP acceleration methods, it is very hard to assess the novelty of the presented algorithms.
I am also unclear about how you set $N$ in practice.
Technical Quality: 3 good
Clarity: 1 poor
Questions for Authors: 1. How do you set $N$?
2. Is the error monotone in $N$? (If so, the answer to (1) is simple -- use the largest feasible $N$. I'm not sure otherwise.)
3. Can you explain the differences between your solutions and existing DP acceleration techniques?
4. Vaidya et al. also present an algorithm with runtime $O(N^2 k)$ in their paper, under some assumptions. Can you please comment on whether these are the same conditions you use in Fast PLBF++?
5. In cases where Fast PLBF++ yields a different data structure than PLBF, can you bound the increase in error?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 1 poor
Contribution: 2 fair
Limitations: None
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate the reviewer's insightful comments. We are happy to answer the questions raised.
> How do you set N?
Finding the optimal N and k is difficult, but setting N to about 1000 and k to about 100 may be a good choice. The additional ablation study using two real-world datasets shows the following (for details, see the response to reviewer cwdf):
- The FPR is small enough when N\~1000 and k\~100. A larger N or k does not seem to reduce the FPR much further.
- When N\~1000 and k\~100, the proposed methods take only a few seconds to build (although the original PLBF takes more than an hour).
Although it is not certain whether N\~1000 and k\~100 make the FPR small enough for unknown datasets, our proposed methods are beneficial because they can always be constructed faster than the original PLBF for any N and k.
> Is the error monotone in N? (If so, the answer to (1) is simple -- use the largest feasible N. I'm not sure otherwise.)
As N increases, the accuracy improves (although not necessarily monotonically), but the construction time also increases. Therefore, depending on the application, N should be chosen appropriately, considering the trade-off between accuracy and construction time.
In the case of the original PLBF, the construction time significantly increases as N increases, while the proposed methods do not suffer from such long construction times. Hence, it is easier to balance accuracy and construction time using the proposed methods.
> Can you explain the differences between your solutions and existing DP acceleration techniques?
The two methods we proposed are not novel DP acceleration methods; rather, they accelerate PLBF construction by carefully analyzing the DP part of the PLBF construction.
First, fast PLBF is not a DP acceleration method; it is a method that achieves acceleration by solving a large number of “problems” collectively, whereas PLBF solves them separately. PLBF iterates over all the candidates of the rightmost region and finds the optimal thresholds (i.e., t) and false positive rates (i.e., f) for each candidate. Since there are N-k+1 candidates of the rightmost region, there are N-k+1 "problems" to solve (see Appendix C of the original PLBF paper for details). PLBF solves N-k+1 problems separately, i.e., it computes the N-k+1 DP tables separately. Fast PLBF, on the other hand, computes only one DP table and leverages it repeatedly to solve N-k+1 “problems.” We mathematically showed that the results of PLBF and fast PLBF are identical.
Second, indeed fast PLBF++ introduces a faster calculation of DP and is exactly the same as fast PLBF except for the DP part. However, for this acceleration, it is necessary to discover that "monotonicity" often appears in the DP calculation for PLBF construction. We provided the following:
- An intuitive explanation for the appearance of "monotonicity" (see Figure 7).
- A theoretical proof (Theorem 4.3) that the perfect "monotonicity" appears under the assumption that the score distribution is “ideal” (this assumption is specific to the PLBF problem and is a natural assumption).
- Experimental results indicating that fast PLBF++ achieves almost the same accuracy as (fast) PLBF in many cases, even when the score distribution is not ideal (see Section 5 and Appendix G).
In conclusion, fast PLBF++ is not novel as a DP speedup method, but it is a speedup method proposed by careful analysis of the PLBF problem.
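To illustrate the kind of speedup that such monotonicity enables, here is a generic divide-and-conquer DP optimization sketch (our own illustration, simpler than SMAWK, and not fast PLBF++ itself): when the argmin of each row is known to move monotonically with the row index, all row minima can be found with O(n log n) cost evaluations instead of O(n^2).

```python
def naive_row_minima(n, cost):
    # argmin over i <= j of cost(i, j) for each j, in O(n^2) evaluations
    return [min(range(j + 1), key=lambda i: cost(i, j)) for j in range(n)]

def dc_row_minima(n, cost):
    """O(n log n) evaluations when the (leftmost) argmin is monotone in j."""
    opt = [0] * n

    def solve(jlo, jhi, ilo, ihi):
        if jlo > jhi:
            return
        jmid = (jlo + jhi) // 2
        # By monotonicity, the argmin for jmid lies in [ilo, min(ihi, jmid)].
        best_i = min(range(ilo, min(ihi, jmid) + 1), key=lambda i: cost(i, jmid))
        opt[jmid] = best_i
        solve(jlo, jmid - 1, ilo, best_i)   # left half: argmins are <= best_i
        solve(jmid + 1, jhi, best_i, ihi)   # right half: argmins are >= best_i

    solve(0, n - 1, 0, n - 1)
    return opt

# A cost with monotone argmin (continuous minimizer at i = j - 1.5).
n = 64
def cost(i, j):
    return (j - i) ** 2 + 3 * i

assert dc_row_minima(n, cost) == naive_row_minima(n, cost)
```

The monotonicity precondition is exactly the structural property that must be verified for the problem at hand, which is why discovering it in the PLBF DP is the substantive step rather than the generic recursion.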
We will include the above discussion in the final manuscript. In addition, we must further discuss the relationship between the proposed methods and general DP acceleration methods pointed out by the reviewer SSjS. Such a discussion will solidify the theoretical connection between our approach and existing DP acceleration methods. Thanks again for the informative and insightful comments.
> Vaidya et al. also present an algorithm with runtime O(N^2k) in their paper, under some assumptions. Can you please comment about whether this is the same conditions you use in Fast PLBF++?
These two assumptions are different.
PLBF's $O(N^2 k)$ method assumes that "the false positive rate is less than 1 for all regions when optimal thresholds and false positive rates are set for each region" (our experiments in Appendix F show that this method can be very inaccurate compared to PLBF's $O(N^3 k)$ method).
On the other hand, fast PLBF and fast PLBF++ do not make this assumption. The assumption made by fast PLBF++ is that "the fraction of positive elements among the elements in the $i$-th segment ($g_i/h_i$) increases monotonically with increasing $i$". Under this assumption, we can prove that fast PLBF++ achieves the same accuracy as PLBF. However, even when this assumption does not hold, fast PLBF++ and PLBF experimentally achieve almost the same accuracy.
> In cases where Fast PLBF++ yields a different data structure than PLBF, can you bound the increase in error?
As mentioned in the Limitation section of the main text, it remains future work to theoretically bound the increase in error. Currently, we have found the following:
- Theoretical proof (provided in Appendix C) shows that fast PLBF++ is consistent with PLBF under ideal conditions.
- Experiments on real datasets (in Section 5) show that fast PLBF++ consistently achieves almost the same accuracy as PLBF.
- Experiments on many artificial data (in Appendix G) show that the accuracy of fast PLBF++ rarely seems to deteriorate catastrophically, unless the situation is very far from ideal.
---
Rebuttal Comment 1.1:
Comment: Thank you for your rebuttal.
Can you please relate to whether "Concave-1D" (https://www.sciencedirect.com/science/article/pii/002001909090215J) is applicable to your problem and what would the resulting runtime be?
It solves the following DP problem:
$$ E[j] = \min_{0\le i<j} D[i] + w(i,j)\ ,$$
and it seems that a massaging of the notations might yield a $O(N^2 k)$ solution.
Also, is it possible to generalize your solution to a general DP setting of the above form? What would be the requirements from $E,D,w$, and what would be the resulting runtime?
If you can reformulate your solution as a general DP technique, it could have impact much greater than accelerating PLBF. | Summary: The paper presents two improvements of the algorithm PLBF (Partitioned Learned Bloom Filter), called fast PLBF and fast PLBF++. PLBF learns the distribution structure and uses it to minimize the memory allocation. An ML model (LightGBM is used in this paper) is trained to predict a score between [0,1] indicating set membership probability. The score space is divided into N segments which are then grouped into k<N regions, each using a backup Bloom filter. The grouping of N segments into k regions is formulated as an optimization problem and solved by dynamic programming, with time complexity O(N^3 k) for PLBF. The new method fast PLBF carefully refines the dynamic programming steps (by avoiding some redundant computation) and builds the same data structure in O(N^2 k). Fast PLBF++ can construct a slightly different data structure, but faster; under some conditions, fast PLBF++ also constructs the same data structure. Experimental evaluation shows the performance of the two schemes.
Strengths: Learned Bloom filters are a useful data structure. Improving the construction time for PLBF allows for the construction of filters for larger values of N, which improves their performance. The experimental evaluation is good.
Weaknesses: It is not clearly explained what computations are redundant in PLBF and how they are avoided by fast PLBF.
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: I would like to see more space devoted in the paper to the explanation of fast PLBF compared to PLBF. Can you explain in more detail, or even intuitively, what computation is redundant in PLBF and how it is avoided by the new scheme? All of it is deferred to the appendix, but even there it is not very clear. This is one of the main points of the paper and I think more justification should be included in the paper itself, not in the appendix.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 3 good
Contribution: 2 fair
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate the reviewer's insightful comments. We are happy to provide additional clarification on the unclear points raised.
> It is not clearly explained what computations are redundant in PLBF and how they are avoided by fast PLBF.
> Can you explain in more detail, or even intuitively, what computation is redundant in PLBF and how it is avoided by the new scheme?
> This is one of the main points of the paper and I think more justification should be included in the paper itself, not in the appendix.
Let us explain our approach intuitively. While PLBF builds many DP tables, our fast PLBF builds only one DP table and reuses it many times. Specifically, by exploiting the fact that the largest DP table $\mathrm{DP}^N_\mathrm{KL}$ contains the other, smaller DP tables, fast PLBF builds only $\mathrm{DP}^N_\mathrm{KL}$ and reuses it. This is a very simple speedup, but it becomes apparent only after reorganizing the optimization problem and its solution (which were not well organized in the original PLBF paper) and discovering that $j$ plays no role in the computation of $\mathrm{DP}^j_\mathrm{KL}$.
Let us further elaborate on the optimization problem and how PLBF and fast PLBF solve it. To construct PLBFs, we need an optimal $\mathbf{t}$ and $\mathbf{f}$, where $\mathbf{t}$ are the thresholds for partitioning the score space into several regions and $\mathbf{f}$ are the false positive rates for each region. To find the optimal values, we must iterate over all the candidates of the “rightmost region” and find the optimal $\mathbf{t}$ and $\mathbf{f}$ for each candidate (see Appendix C of the original PLBF paper for details). Since there are $N-k+1$ candidates for the rightmost region, there are $N-k+1$ “problems” to solve. PLBF solves the $N-k+1$ problems separately, i.e., it computes the $N-k+1$ DP tables separately. Fast PLBF, on the other hand, computes only one DP table and leverages it repeatedly to solve the $N-k+1$ problems. We mathematically showed that the results of PLBF and fast PLBF are identical.
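The reuse idea can be made concrete with a toy sketch (our own illustration with a placeholder per-region cost; the actual PLBF objective based on KL divergence and false-positive-rate allocation is omitted):

```python
# Toy illustration of the speedup (not the actual PLBF objective):
# DP[i][l] = best cost of grouping segments 0..i-1 into l regions,
# where cost(a, b) is a placeholder per-region cost for segments a..b-1.

def build_table(cost, n):
    INF = float("inf")
    DP = [[INF] * (n + 1) for _ in range(n + 1)]
    DP[0][0] = 0.0
    for i in range(1, n + 1):
        for l in range(1, i + 1):
            DP[i][l] = min(DP[m][l - 1] + cost(m, i) for m in range(l - 1, i))
    return DP

def naive_all_prefixes(cost, n):
    # PLBF-style: build a fresh DP table for each subproblem (prefix length j).
    return [build_table(cost, j) for j in range(1, n + 1)]

def fast_all_prefixes(cost, n):
    # Fast-PLBF-style: build only the largest table once; each smaller
    # table is a sub-table of it, so slicing suffices.
    big = build_table(cost, n)
    return [[row[: j + 1] for row in big[: j + 1]] for j in range(1, n + 1)]
```

Both functions return identical tables, but the second performs the cubic DP work only once instead of roughly $N$ times.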
We will add this discussion to the final paper and detail the description of fast PLBF. | Summary: This paper improves upon the Partitioned Learned Bloom Filter (PLBF), which is an essentially optimal learned Bloom filter introduced in ICML 2021. Construction time of PLBF is $O(N^3k)$, where $N > k$ are hyper-parameters. The authors show how to construct the same filter much more quickly; their Fast Partitioned Learned Bloom Filter (FPLBF) produces the same filter as PLBF and takes only $O(N^2k)$ time to construct. They also propose the fast PLBF++ algorithm that runs in even quicker $O(Nk\log N + Nk^2)$ time. It outputs the same filter under natural assumptions on the input and is essentially just as good experimentally. Extensive experimental evaluation with real world datasets and ablation studies support the claims and demonstrate significant practical speedups.
Strengths: 1) Bloom filters are a ubiquitous approximate set membership data structure. Their combination with learned predictors has been an active and fruitful direction of research in the past years. This paper is another significant step.
2) The authors are, to the best of my knowledge, the first to spot the redundancy in the PLBF paper and save an $O(N)$ factor.
3) Fast PLBF++ makes a nice connection to the monotone matrix maximum problem. The authors prove that under natural assumptions it provides the same output even more quickly. It works well on real data even when the assumptions are not 100% satisfied.
4) Detailed experimental evaluation, especially kudos for the ablation in Section 5.3.
Weaknesses: 1) The PLBF authors were rather sloppy in rebuilding the same dynamic programming table from scratch when all they needed was to add another ''row'' to it. In fact, looking at eq. (2) in the paper, it is quite obvious that the superscript $j$ plays no role in $\mbox{DP}^j$ and a single DP table suffices. Nevertheless, credit is due for spotting this.
2) Practical runtime gains are potentially overstated, as they depend on the choice of hyper-parameters $N$ and $k$. The paper shows runtime improvements for $N=1000$ and $k=5$; these hyperparameters are copied from the PLBF ICML 2021 paper, where they appeared without much justification. The (excellent) ablation experiments suggest that much smaller $N$ and much larger $k$ are optimal, which would decrease the runtime gaps in practice.
Technical Quality: 4 excellent
Clarity: 3 good
Questions for Authors: Figure 12 shows that $N=50$ or at most $N=100$ is sufficient for minimizing the false positive rate (for a given memory budget) for $k=5$. Figure 13 indicates that $k=100$ or even higher is required for minimizing the false positive rate for $N=1000$. Could you please find the $(N,k)$ combination that approximately minimizes the false positive rate and then measure and show the construction times for that? Could you please also zoom into the FPR plots around their minima without using log-scale on the x-axis (perhaps in the appendix), as I could not determine the location of the FPR minima in terms of $N$ even when I viewed the enlarged pdf on my screen.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 3 good
Contribution: 3 good
Limitations: Yes, it's adequate.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their positive evaluation of our paper. Let us answer the questions.
> Could you please find the (N,k) combination that approximately minimizes the false positive rate and then measure and show the construction times for that?
In Appendix E, we performed an ablation study on the hyperparameters N and k. In response to this comment, we performed an additional, more comprehensive ablation study.
First, we performed a coarse grid search with N=10,20,50,100,200,500,1000 and k=3,5,7,10,20,50,100,200,500,1000 (note that k<=N). The memory usage of the backup Bloom filters was fixed at 500 Kb. The results are summarized as follows.
In the case of URL dataset,
- for PLBF: the FPR was minimal (1.25%) when (N, k)=(1000, 500).
- the construction time is 7.5 hours.
- for fast PLBF: the FPR was minimal (1.25%) when (N, k)=(1000, 500).
- the construction time is 95.0 seconds (284 times faster than original PLBF).
- for fast PLBF++: the FPR was minimal (1.26%) when (N, k)=(1000, 500).
- the construction time is 13.1 seconds (2064 times faster than original PLBF).
In the case of Ember dataset,
- for PLBF: the FPR was minimal (0.829%) when (N, k)=(500, 50).
- the construction time is 366 seconds.
- for fast PLBF: the FPR was minimal (0.829%) when (N, k)=(500, 50).
- the construction time is 7.98 seconds (45 times faster than original PLBF).
- for fast PLBF++: the FPR was minimal (0.857%) when (N, k)=(200, 50).
- the construction time is 5.67 seconds (64 times faster than original PLBF).
These results coarsely give the (N, k) combination that minimizes the FPR. They also demonstrate that for the coarse optimal (N, k) settings, the original PLBF requires a substantial construction time, ranging from several minutes to several hours. On the other hand, our two proposed methods take only a few seconds to tens of seconds to build.
> Could you please also zoom into the FPR plots around their minima without using log-scale on the x-axis (perhaps in the appendix) as I could not determine the location of the FPR minima in terms of N even when I viewed the enlarged pdf on my screen.
Next, we performed a more detailed grid search of the fast PLBF accuracy around (N, k)=(1000, 500) for the URL dataset and (N, k)=(500, 50) for the Ember dataset. Specifically, N=500,750,1000,1250,1500 and k=200,400,500,600,800,1000,1250,1500 for the URL dataset and N=200,400,500,600,800,1000 and k=20,40,50,60,80,100 for the Ember dataset (again, note that k<=N). The memory usage of the backup Bloom filters was fixed at 500 Kb.
In the case of URL dataset,
- the FPR was minimal (1.23%) when (N, k)=(1500, 800).
In the case of Ember dataset,
- the FPR was minimal (0.829%) when (N, k)=(500, 50).
This comprehensive ablation study and its appropriate visualization will be added to the final version.
---
Rebuttal Comment 1.1:
Comment: Thanks for the additional experiments.
If you read it in time: what were the speedups in the end with the optimal (N, k) parameters?
I also appreciate Reviewer SSjS's insightful comments, which promptly connect these impactful practical results with the theory of accelerated dynamic programming.
---
Reply to Comment 1.1.1:
Comment: We appreciate cwdf’s question and are happy to answer it.
We are sorry, but due to lack of time, we cannot give a fully experimental answer as to what the speedups are in the end with the optimal $(N, k)$. Relying partly on estimates, the degree of speedup with the optimal $(N, k)$ can be summarized as follows:
For the case of the URL dataset (minimum FPR is achieved with $(N, k)=(1500, 800)$),
- Fast PLBF is **estimated** to be about 380 times faster than PLBF.
- Fast PLBF++ is **estimated** to be about 3500 times faster than PLBF.
For the case of the Ember dataset (minimum FPR is achieved with $(N, k) = (500, 50)$),
- Fast PLBF is 45 times faster than PLBF.
- Fast PLBF++ is 62 times faster than PLBF.
The estimation was done as follows: the construction time of PLBF is asymptotically proportional to $N^3$ and proportional to $k$. With $(N, k) = (1000, 500)$, PLBF took 7.5 hours to build. So, we estimate that it would take about 40 hours to build with $(N, k) = (1500, 800)$. Note that the discussion period ends 3 hours from now, so we cannot run this experiment. With $(N, k)=(1500, 800)$, Fast PLBF takes 376 seconds, and Fast PLBF++ takes 40.6 seconds (these are actual measurements, not estimates). Thus, Fast PLBF and Fast PLBF++ are estimated to be about 380 and 3500 times faster than PLBF, respectively.
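As a sanity check, the scaling arithmetic behind this estimate can be reproduced directly (assuming, as stated above, that construction time is proportional to $N^3$ and to $k$):

```python
# PLBF construction time is taken to scale as N^3 * k.
base_hours, base_N, base_k = 7.5, 1000, 500    # measured at (N, k) = (1000, 500)
scale = (1500 / base_N) ** 3 * (800 / base_k)  # 3.375 * 1.6 = 5.4
est_hours = base_hours * scale                 # 40.5, i.e., "about 40 hours"
```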
We will include the actual experimental results in the final version. | null | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
CluB: Cluster Meets BEV for LiDAR-Based 3D Object Detection | Accept (poster) | Summary: This paper combines two streams of LiDAR-based 3D object detectors into a single model: conventional BEV-based detectors and emerging cluster-based detectors. The main contribution is that It is the first one to combine the two streams of methods, leveraging the high-recall of BEV representation and fine-grained structure of cluster representation. It reveals the potential of such a combination, providing a possibility for a new sub-direction.
Strengths: - This paper is well written with clear representation and concise illustration. For example, Figure 2 is very informative and concise, and I could clearly get the overall idea at first glance.
- To my best knowledge, it is the first one to explicitly combine the two streams of methods, leveraging the high-recall of BEV representation and fine-grained structure of cluster representation. Although straightforward, the idea is well-motivated and makes sense to me.
- Every module is well ablated. Overall performance on Waymo Open Dataset and nuScenes dataset is good. Especially, on nuScenes, it achieves state-of-the-art performance.
Weaknesses: - Although the overall writing is clear, the authors are not very careful with some details. For example, Figure 1 (a) is very similar to a figure in the FSD paper, and it would be appropriate to add a citation. In L108, the citations should be placed after "benchmarks" rather than after the detectors. In Eqn (4), $Ldir$ should be $L_{dir}$. There are many errors in Table 1: the best and second-best entries of L2 Vehicle should be PillarNet and PVRCNN++. The performance of FSD seems inaccurate because, if I recall correctly, it has vehicle performance similar to PVRCNN++. Please check. The second-best Pedestrian L2 APH is not underlined. Also, the table extends beyond the right margin.
- Although the experiments thoroughly verify the effectiveness of the proposed modules, they are straightforward ablations without much insight. I encourage the authors to add a more detailed performance analysis to reveal the inner workings of CluB. For example, how do the two kinds of queries affect the final performance? Since the BEV-based representation has higher recall and the cluster-based representation preserves fine-grained structure, the authors should design experiments to demonstrate these properties, which cannot be demonstrated by the final performance numbers alone. Table 1 in the supplementary materials should be moved to the main paper; the current 9-page space is not fully utilized.
- I encourage authors to conduct runtime evaluations and report the latency.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: I like the idea of combining the two kinds of representation. However, the authors should carefully address my concerns.
The manuscript should be updated in the OpenReview system if possible to address the first weakness. Additional experiments could be simply posted on this page for now.
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The authors adequately addressed the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely appreciate your constructive and detailed comments. We respond to your comments below.
> *detail errors and format correct*
Thanks for pointing out these typos. We have addressed all the errors you mentioned, and we will proofread the paper very carefully. Since the manuscript cannot be updated in the OpenReview system at present, we will make sure the aforementioned errors are corrected in the final manuscript.
> *Errors of Table 1.*
Thanks for pointing it out. We make the corresponding corrections to address the problems.
- We checked the performance of FSD and modified the numbers in the revised Table 1. The best vehicle performance in the FSD (70.1%) paper is indeed similar to PVRCNN++ (70.2%) in terms of APH/L2.
- As FSD adopts a longer point cloud range, we apply the same input range to our CluB to further enhance the performance (71.4% mAPH/L2). The enhanced CluB is trained on 8 NVIDIA RTX A6000 GPUs with a larger batch size of 4. The latest result is also reported in the revised Table 1.
- We have carefully proofread the label of the best and second-best entries and revised the table size as pointed out.
Since the manuscript cannot be updated in the OpenReview system at present, we present **part of the revised Table 1** (Table 5 in the PDF). Thank you again for your detailed and thorough review.
> *Although experiments thoroughly verify the effectiveness of proposed modules, they are straightforward ablation without insight. I encourage authors to add more detailed performance analysis to reveal the inner workings of CluB. For example, how do the two kinds of queries affect the final performance? Since BEV-based representation has higher recall and cluster-based representation preserves fine-grained structure, authors should design experiments to demonstrate such properties, which cannot be demonstrated by simple numbers of the final performance.*
Thanks for your insightful comment. Based on your suggestion, we conduct both quantitative and qualitative experiments to show how the two cluster queries and BEV queries affect the final performance. We first provide the quantitative results on the Waymo dataset in the table below (Table 4 in the PDF).
Compared to CluB (the first row), removing the cluster queries (the second row) causes a significant drop in the performance of the cyclist and pedestrian classes. This decline suggests that cluster queries play a beneficial role in enhancing the diversity of object queries, particularly for fine-grained targets. Removing the BEV queries alone (the last row) drops overall performance by 6.3% mAPH/L2. We conclude that BEV queries directly indicate the possible positions of objects in the BEV space, making it easier for the decoder to detect target objects. Therefore, BEV queries and cluster queries have their respective advantages in query initialization for the decoder, thus enhancing detection performance.
To intuitively demonstrate the contribution of these two queries, we also visualize the two kinds of queries on the same BEV features using Waymo dataset, which can be seen in the Figure 3 of the PDF. The yellow square represents the BEV query activated by the BEV heatmap, while the pink cross signifies the cluster query derived from vote center positions in 3D space. The red circle denotes a potential object initialization position overlooked by the BEV query. This comparison reveals that BEV queries provide a more comprehensive coverage of potential object locations, whereas cluster queries enrich object diversity with finer granularity.
We will add the experimental results and analysis in the future revision. Thanks again for your advice.
| BEV queries | Cluster queries | Vehicle | Pedestrian | Cyclist | mAPH/L2 |
|-------------|-----------------|---------|------------|---------|---------|
| √ | √ | 62.9 | 64.4 | 64.5 | 63.9 |
| √ | | 62.6 ↓0.3 | 62.7 ↓1.7 | 62.0 ↓2.5 | 62.5 ↓1.4 |
| | √ | 56.9 ↓6.0 | 57.7 ↓ 6.7 | 58.1 ↓6.4 | 57.6 ↓6.3 |
> *I encourage authors to conduct runtime evaluations and report the latency.*
Thank you for the suggestion. We evaluate the runtime on a single NVIDIA GeForce RTX 3090. The computational overhead of CluB is 1.2 times that of the BEV-only baseline in terms of latency. Please see the table below:
| | Baseline | CluB |
|--------------|----------|------|
| Latency (ms) | 144 | 167 |
---
Rebuttal Comment 1.1:
Title: Authors have addressed most of my concerns.
Comment: Thanks for your feedback and I am glad to read such an informative rebuttal. Fig. 3 is exactly what I suggested, and I believe more analysis can be done to gain deeper insight (I am not asking for them now). Before my final decisions, I have a few more concerns:
- Although FSD uses a longer range, I do not think a longer range could lead to notable improvements in Waymo because there are very few objects beyond 75 meters. Are you sure that your performance improvement of the enhanced version is from adopting a longer range?
- When I refer to the original FSD paper, I found you cited the wrong paper (CenterFormer) in Table 1.
No more experiments are needed, just discussion.
---
Reply to Comment 1.1.1:
Title: Response to Reviewer UKd9
Comment: We sincerely appreciate the suggestions and comments. Your insightful comments have greatly enhanced the quality of this paper. We address the concerns and questions you raised as follows.
>*Although FSD uses a longer range, I do not think a longer range could lead to notable improvements in Waymo because there are very few objects beyond 75 meters. Are you sure that your performance improvement of the enhanced version is from adopting a longer range?*
Thanks for your valuable comments. In fact, we also increased the batch size from 2 to 4 and trained the enhanced model on 8 A6000 GPUs as mentioned in the previous response. These factors may also contribute to the improved performance. We will further attempt to pinpoint the exact reason and provide a more accurate explanation in the camera-ready version.
>*When I refer to the original FSD paper, I found you cited the wrong paper (CenterFormer) in Table 1.*
Thank you for kindly pointing it out. We will proofread Table 1 very carefully and make sure the aforementioned error is corrected in the final version.
Please kindly let us know if there are any other issues that have not been fully addressed. We would be more than happy to engage in further discussion to clarify them. Thank you for your time and attention. | Summary: This work combines the BEV-based and cluster-based representations into a unified framework named CluB. At the feature level, the Cluster Feature Diffusion module and the imitation loss are proposed to fuse the features obtained from the BEV branch and cluster branch. At the query level, the Cluster Query Generation module and the direction loss are proposed to provide more accurate object queries from the cluster branch. CluB achieves state-of-the-art performance on the Waymo and nuScenes datasets.
Strengths: This work mentions that the features of the object centers in BEV-based detectors are diffused from their neighbors and are not in high quality, which is an interesting hypothesis. Based on that, the cluster branch is applied to provide clustered features of the object centers. The whole structure of the work is compact and unified, and the promotion on precision is clearly shown.
Weaknesses: 1. The argumentation of the core hypothesis is insufficient. I think that the max pooling in BEV-based detectors is able to reduce the feature shape and aggregate the feature of the object to the center, just like the cluster-based methods. Additional quantitative analysis is preferred to show the difference.
2. The captions of figures and tables are too long and mainly repeat the text of the main body.
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: 1. Some results in the ablation study do not match each other. For example, the mAPH of Table 4 (b) should be the same as (d), but in fact they are not. Are there differences between the two configurations?
2. I am wondering why CluB performs better on the cyclist class of Waymo, because the point clouds of cyclists are more compact than the vehicles and should benefit less from the cluster branch.
3. How efficient is CluB compared to the baseline? The time cost could be significantly increased by applying U-Net in the cluster branch.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 2 fair
Limitations: It would be nice to apply different BEV-based detectors as the baseline to test the universality of the method.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely appreciate your constructive and detailed comments.
> *The argumentation of the core hypothesis is insufficient. I think that the max pooling in BEV-based detectors is able to reduce the feature shape and aggregate the feature of the object to the center, just like the cluster-based methods. Additional quantitative analysis is preferred to show the difference.*
Thanks for your thought-provoking comment. Accordingly, we conducted an additional experiment that quantitatively verifies that max pooling in BEV-based detectors cannot work like the cluster-based method. The results are summarized in the following table (Table 2 in the PDF).
| Method | Vehicle | Pedestrian | Cyclist | mAPH/L2 |
|------------------|---------|------------|---------|---------|
| Baseline | 61.0 | 60.4 | 60.1 | 60.5 |
| Baseline_maxpool | 60.5 | 59.9 | 59.1 | 60.0 |
| Baseline_CFD | **61.9** | **61.2** | **62.5** | **61.9** |
Specifically, Baseline_maxpool takes sparse voxels directly from the sparse U-Net and transforms them into a dense BEV feature. Subsequently, several pre-defined max pooling layers are applied to amplify the object's feature towards its center. In contrast, Baseline_CFD replaces the multiple max-pooling layers with our cluster feature diffusion (CFD) module. This module dynamically merges cluster features with BEV features to enhance the center's feature.
As demonstrated in the table, the mAPH/L2 drops from 60.5% to 60.0% for Baseline_maxpool, which demonstrates that max pooling operations on BEV features cannot effectively aggregate features to the center of the target to enhance the representation of target features. In comparison, the cluster representation, with its adaptable receptive fields, demonstrates superior effectiveness in achieving this enhancement. We will include the experimental results and analysis in the future revision. Thanks again for your advice.
> *The captions of figures and tables are too long and mainly repeat the text of the main body.*
Thanks for your kind advice. We will shorten the captions of figures and tables in the revised manuscript as suggested.
> *The mAPH of Table 4 (b) should be the same as (d), but in fact they are not. Are there differences between the two configurations?*
Thanks for the comment. The configurations of methods (b) and (d) in Table 4 are in fact different. Compared to the baseline, Method (b) adds the CFD module (feature-level) while Method (d) introduces the CQG module (query-level).
Specifically, Method (b) adds the Cluster Feature Diffusion (CFD) module compared to the baseline BEV-based detector, which adaptively generates diffused vote features in the unified BEV space for the feature representation enhancement. Method (d) only introduces the Cluster Query Generation (CQG) module compared to the baseline, which enriches the diversity of object queries by using the positions of voting centers from the cluster branch.
> *I am wondering why CluB performs better on the cyclist class of Waymo, because the point clouds of cyclists are more compact than the vehicles and should benefit less from the cluster branch.*
Actually, the cluster-based branch works better when the point clouds are more compact, which means the improvements are more significant for cyclists than for vehicles. Due to the fixed voxelization and over-downsampling, it is challenging to capture sufficient details of small objects with the BEV-based branch. On the contrary, the cluster branch is highly beneficial for extracting fine-grained features from sparser points with a smaller cluster voxel size, which leads to a notable performance improvement.
> *How efficient is CluB compared to the baseline? The time cost could be significantly increased by applying U-Net in the cluster branch.*
Thank you for the suggestion. The computational overhead of CluB is 1.2 / 1.3 times that of the BEV-only baseline in terms of latency / FLOPs respectively. We present the efficiency-related statistics of our CluB and baseline (w.o. cluster branch) on Waymo dataset below:
| | Baseline | CluB |
|--------------|----------|--------|
| Latency (ms) | 144 | 167 |
| FLOPs (G) | 95.71 | 112.35 |
The time cost does not increase significantly for the following reasons. First, the decoder of the U-Net is shared between the BEV-based baseline and our approach to extract 3D voxel features. Besides, the cluster branch primarily involves fully sparse operations [1] on the given point set. The latency is evaluated using a single NVIDIA GeForce RTX 3090 GPU.
> *It would be nice to apply different BEV-based detectors as the baseline to test the universality of the method.*
Thanks for the valuable suggestion. We choose CenterPoint [2], which is also a prevalent BEV baseline in the community, to test the universality of the method. We provide the results in the following table (Table 3 in the PDF).
CenterPoint is an anchor-free one-stage detector, which extracts BEV features from voxelized point clouds to find object centers and regress to 3D bounding boxes.
To adapt the CluB framework for this comparison, we excluded the query-level enhancement, since CenterPoint does not employ a transformer architecture. Remarkably, our method still achieves higher accuracy (from 67.4% to 68.2%) compared to CenterPoint, **even when utilizing only feature-level enhancement**. We have added these results to the appendix of the revised version to show the universality of the method.
| Method | Vehicle | Pedestrian | Cyclist | mAPH/L2 |
|-------------|---------|------------|---------|---------|
| CenterPoint | 67.9 | 65.6 | 68.6 | 67.4 |
| +CluB | 68.4 | 66.5 | 69.6 | 68.2 |
[1] Lue Fan et al., Fully Sparse 3D Object Detection, NeurIPS 2022.
[2] Tianwei Yin et al., Center-Based 3D Object Detection and Tracking, CVPR 2021.
---
Rebuttal Comment 1.1:
Title: Raise my score
Comment: The authors address most of my concerns and I thus decide to upgrade my score to Borderline Accept | Summary: The authors explore and analyze the existing LiDAR-based 3D object detection framework, and propose to adopt BEV-based and cluster-based methods to aggregate those features from LiDAR input. The experimental results on Waymo and nuScenes are better compared to the existing methods.
Strengths: 1. The task of 3D object detection is very important in the 3D community. Interestingly, the authors propose to extract and combine BEV and cluster-based features.
2. The paper is easy to follow.
3. The authors conduct the experiments on two widely-used datasets, including nuScenes and Waymo.
Weaknesses: 1. Computation/memory footprint comparison. The authors didn't make a comparison of their work in terms of memory/speed with the existing 3D detection methods. The time consumption might be large since the proposed method includes transforming the vote cluster to BEV matching and aggregation.
2. Most 3D object detection SOTA methods are multi-frame based. It would be interesting to see how multiple frames fit into the proposed architecture.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Please refer to the questions that I describe in the Weakness part. I would also consider the rebuttal and other reviews.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Yes. The authors presented one of the limitations in terms of computational tradeoffs.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the thoughtful review. We respond to your comments below.
> *Computation/memory footprint comparison.*
Thank you for the suggestion. We have computed the statistics for computation and memory usage of our CluB model and the widely used BEV-based baseline detector Transfusion-L [1] on a single NVIDIA GeForce RTX 3090 GPU.
As shown in the table, the computational overhead of CluB is 1.2 times that of the BEV-only baseline in terms of latency. Meanwhile, the FLOPs and memory cost do not go up significantly as the cluster branch primarily involves fully sparse operations on the given point set [2].
| | Baseline | CluB |
|-----------------|------------|------------|
| FLOPs (G) | 95.71 | 112.35 |
| Latency (ms) | 144 | 167 |
| GPU Memory (MB) | $1.11 × 10^4$ | $1.38 × 10^4$ |
> *It would be interesting to see how multiple frames could be fitted into the proposed architecture.*
Thanks for your constructive feedback. It points out an insightful and crucial direction for our future research. We think it is possible to fit multiple frames into the CluB framework. We kindly refer you to the PDF (**Figure 2**) for an overview of the framework. Next, we give a detailed illustration of a multi-frame CluB architecture.
Since our CluB is a query-based 3D detector, we could conveniently model the temporal interaction by utilizing object queries. This is inspired by the object-query-centric temporal modeling architecture of StreamPETR [3]. As illustrated in Figure 2 of the PDF, we first build a memory queue to store the historical object queries. The current queries from our CluB framework are fed into the propagation transformer to interact with historical queries and current BEV features, obtaining temporal and spatial information. The output queries are further used to generate detection results and the top-K queries are pushed into the memory queue. Through the recurrent update of the memory queue, the long-term temporal information is propagated frame by frame. Note that the memory queue follows the first-in, first-out (FIFO) rule.
We would take this constructive suggestion and extend our CluB framework to a multi-frame version in the future.
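The FIFO memory-queue update described above can be sketched as follows. This is a hypothetical Python illustration inspired by StreamPETR's design; the class and parameter names are our own, and real object queries would be feature vectors rather than strings:

```python
from collections import deque

class QueryMemoryQueue:
    """FIFO queue storing the top-K historical object queries per frame.

    Hypothetical sketch of the StreamPETR-style memory described above;
    names and sizes are illustrative, not taken from the paper.
    """

    def __init__(self, max_frames=4, top_k=2):
        # deque(maxlen=...) drops the oldest frame automatically (FIFO rule).
        self.queue = deque(maxlen=max_frames)
        self.top_k = top_k

    def push(self, queries, scores):
        # Keep only the top-K scoring queries of the current frame.
        ranked = sorted(zip(scores, queries), reverse=True)
        self.queue.append([q for _, q in ranked[: self.top_k]])

    def historical(self):
        # Flatten stored frames, oldest first, for cross-attention with
        # the current frame's queries.
        return [q for frame in self.queue for q in frame]
```

A propagation transformer would then attend from the current queries to `historical()` plus the current BEV features, and push the resulting top-K queries back into the queue.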
[1] Xuyang Bai et al., TransFusion: Robust LiDAR-Camera Fusion for 3D Object Detection with Transformers, CVPR 2022.
[2] Lue Fan et al., Fully Sparse 3D Object Detection, NeurIPS 2022.
[3] Shihao Wang et al., Exploring Object-Centric Temporal Modeling for Efficient Multi-View 3D Object Detection, arXiv 2023.
---
Rebuttal Comment 1.1:
Title: Authors have addressed most of my concerns.
Comment: Thanks for the answers and clarification in the rebuttal, which covered most of my concerns. | Summary: The paper introduces a new 3D object detection framework called CluB that combines the strengths of BEV-based and cluster-based detectors. CluB effectively integrates the context-aware BEV representation and object-centric cluster representation at both the feature level and the query level.
The proposed method outperforms state-of-the-art methods by a remarkable margin on two prevalent datasets, i.e., Waymo Open Dataset and nuScenes, demonstrating the effectiveness and generality of the proposed method.
Strengths: 1. The paper introduces a novel framework that combines the strengths of BEV-based and cluster-based detectors at both the feature level and the query level. The proposed combination method is of a certain degree of novelty.
2. The paper is well-written and easy to follow. The authors provide clear explanations of the proposed method and the experimental results.
3. The paper presents a well-designed and comprehensive experimental evaluation of the proposed CluB framework. The authors provide detailed analysis and comparison with state-of-the-art methods on prevalent datasets.
Weaknesses: 1. Limited novelty: While the CluB framework is unique and original in its approach to integrating BEV-based and cluster-based detectors, the paper does not introduce any fundamentally new concepts or techniques. The proposed method is built on existing methods and combines them in a novel way. The novelty is limited for NeurIPS.
2. Limited analysis of real-time performance: The paper does not provide a detailed analysis of the real-time performance of the proposed CluB framework. To address this weakness, the authors could provide a more detailed analysis of the real-time performance of the proposed method and explore ways to make the method more suitable for real-time scenarios.
3. Limited analysis of the combined result of the two levels: According to the ablation study in Table 4, the combined improvement of the two levels (3.4%) is greater than the sum of their individual improvements (2% + 1.1%). I think this may be the essential reason why the proposed framework is effective, but the paper lacks an in-depth analysis of this issue.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. According to the expression in the paper, Diffused Vote BEV features and Dense BEV features represent the center point semantic information and edge information respectively. But in the design of the Club, Imitation Loss is used to make the two similar. This seems to be contradictory to the previous statement. Can you give some further explanation?
2. As stated in Weakness #3, can you provide a more detailed analysis of the combined result of the two levels?
3. Can you provide a more detailed analysis of the computational complexity?
4. Can you provide a more detailed analysis of the impact of the proposed CluB framework on real-time performance?
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: Authors have adequately addressed the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely appreciate your constructive and detailed comments. We respond to your comments below.
> *According to the expression in the paper, Diffused Vote BEV features and Dense BEV features represent the center point semantic information and edge information respectively. But in the design of the Club, Imitation Loss is used to make the two similar. This seems to be contradictory to the previous statement. Can you give some further explanation?*
>
Thanks for your valuable comment. The imitation loss is leveraged to align the center features of the cluster branch and the BEV branch in an `align-and-fuse` manner, rather than erasing the distinct information of the two branches. In particular, as presented in Equation (2) and Equation (3), the pixels where the imitation loss applies lie within the center region, while no constraint is added at the edges. In the revision, we will include this explanation in Section 3.3.
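The restriction to center pixels can be illustrated with a generic masked L2 imitation loss. This is our sketch only; the exact form is given by Equations (2) and (3) in the paper:

```python
import numpy as np

def center_imitation_loss(bev_feat, cluster_feat, center_mask):
    """Masked L2 imitation loss: only center-region pixels are constrained.

    bev_feat, cluster_feat: (C, H, W) feature maps from the two branches
    center_mask:            (H, W) binary mask of valid center pixels;
                            edge pixels (mask == 0) remain unconstrained
    """
    sq_err = (bev_feat - cluster_feat) ** 2
    masked = sq_err * center_mask          # broadcast mask over channels
    n_valid = max(float(center_mask.sum()), 1.0)
    return float(masked.sum()) / (n_valid * bev_feat.shape[0])
```

Because the mask zeroes out all edge pixels, the BEV branch is free to keep its edge information while imitating cluster features only at object centers.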
> *As stated in Weakness #3, can you provide a more detailed analysis of the combination result of two level?*
>
Thank you for the thought-provoking comment. It is indeed possible for the overall improvement to exceed the sum of the individual improvements: when the two enhancements are applied together, the enriched queries can exploit the features already enhanced at the feature level. We provide a more detailed analysis as follows.
Accordingly, we provide results for each category in the following table (Table 1 in the PDF). Compared with the BEV-based baseline (a), the performance improvements are similar across the three classes when applying the feature-level enhancement alone. Because the features are already enhanced in the first step, the additional improvements from adding queries are more pronounced, which indicates that the two enhancements work synergistically. The phenomenon you mention is most visible on small objects, e.g., 4.0% > 2.3% + 1.4% for pedestrians, showing that the CluB framework is particularly effective at capturing fine-grained targets.
We will add the experimental results and analysis in the future revision.
| Method | Feature-level | Query-level | Vehicle | Pedestrian | Cyclist | mAPH/L2 |
|--------|---------------|-------------|----------|------------|----------|----------|
| (a) | | | 61.0 | 60.4 | 60.1 | 60.5 |
| (b) | √ | | 62.6 ↑1.6 | 62.7 ↑2.3 | 62.0 ↑1.9 | 62.5 ↑2.0 |
| (c) | | √ | 61.7 ↑0.7 | 61.8 ↑1.4 | 61.4 ↑1.3 | 61.6 ↑1.1 |
| (d) | √ | √ | 62.9 ↑1.9 | 64.4 ↑4.0 | 64.5 ↑4.4 | 63.9 ↑3.4 |
> *Can you provide a more detailed analysis of the computational complexity?*
>
Thank you for the suggestion. The computational overhead of CluB is 1.3 times that of the BEV-only baseline in terms of FLOPs. The computation cost is affordable since the auxiliary cluster branch is built with fully sparse operation, such as sparse instance recognition (SIR) module [1]. Please refer to the table below:
| | Baseline | CluB |
|-----------|----------|--------|
| FLOPs (G) | 95.71 | 112.35 |
> *Can you provide a more detailed analysis of the impact of the proposed CluB framework on real-time performance?*
Thank you for the suggestion. We evaluate the runtime on one NVIDIA GeForce RTX 3090. The computational overhead of CluB is 1.2 times that of the BEV-only baseline in terms of latency. Please see the table below:
| | Baseline | CluB |
|--------------|----------|------|
| Latency (ms) | 144 | 167 |
[1] Lue Fan et al., Fully Sparse 3D Object Detection, NeurIPS 2022. | Rebuttal 1:
Rebuttal: We sincerely thank the reviewers for their detailed and insightful comments, as well as the favorable recommendations. We also thank the area chair for your time and efforts in handling our paper. We appreciate the positive comments, e.g., "well-written" and "novel motivation" from Reviewer Z2j1, "well-designed and comprehensive evaluation" from Reviewer oemX, "easy to follow" from Reviewer BdkH, "interesting hypothesis" and "compact structure" from Reviewer KK49, "well-motivated", "concise illustration" and "well-ablated" from Reviewer UKd9.
Following the valuable comments and suggestions, we carefully revised the manuscript and added the suggested experiments to the supplementary material. We hereby refer the reviewers to the detailed responses to each comment. We also upload a PDF with tables and figures due to the character limit of each response. We provide a summary of the changes presented in the attached PDF.
- Tables
- Table 1: As suggested by Reviewer oemX, we provide a more detailed comparison on the effect of the two-level enhancement in CluB on individual classes, showing that the two-level enhancements could work synergistically.
- Table 2: As suggested by Reviewer KK49, we conduct an ablation study on different ways to aggregate the features to the center of the object, showing the effectiveness of the cluster representation with its adaptable receptive field.
- Table 3: As suggested by Reviewer KK49, we show the experimental result of applying the CluB framework to different BEV-based detectors, which demonstrates the universality of our method.
- Table 4: As suggested by Reviewer UKd9, we conduct an ablation study on the effect of the two kinds of object queries for the transformer decoder, showing the two have respective advantages in query initialization for the decoder.
- Table 5: As suggested by Reviewer UKd9, we present part of the revised Table 1 of the manuscript.
- Figures
- Figure 1: As suggested by Reviewer Z2j1, we provide a detailed illustration of the Cluster Feature Diffusion (CFD) module, which is carefully revised based on the Figure 3 of the manuscript.
- Figure 2: As suggested by Reviewer BdkH, we give an illustration on the possible multi-frame version of our proposed CluB, which is inspired by the recent work named StreamPETR.
We look forward to discussing with you over the next few days.
Pdf: /pdf/4a292c4d4f23a282d0fb54a53979b556620d1633.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: This paper proposes a CluB framework to improve the accuracy of 3D object detection by taking advantage of both BEV-based and cluster-based paradigms. The motivation is that the cluster features in voting-based cluster method can largely preserve the 3D structure details of each object, thus supplements the weakened center point features in BEV-based convolutional methods. To achieve the above goals, the authors propose a Cluster Feature Diffusion (CFD) module that adaptively diffuses the valid votes on a vote BEV and fuse it with dense BEV features. An imitation loss is also introduced to transfer object-centric knowledge to the BEV branch and encourage the stability of overall representation learning. Meanwhile, a Cluster Query Generation (CQG) module is proposed to enrich the diversity of object queries by using the voting centers from the cluster branch. Extensive experiments are conducted on the Waymo and NuScenes dataset. Both the ablation studies and the comparison with the SOTA methods demonstrate the effectiveness of the proposed method.
Strengths: Overall the paper is well-written and well-structured.
The experiment results are convincing.
The motivation of combining the BEV-based and voting-based method is somewhat novel.
From the technical aspect, although the design of the CFD and CQG modules is fairly commonplace, it is nevertheless effective.
Weaknesses: The demonstration of the class-aware BEV diffusion is not clear. How to leverage the classification results to control the expansion magnitude? I think there should be a formal formula. Figure 3 is also confusing, how to get Class-aware Vote BEV?
There is a lack of discussion of Model Efficiency. It seems that the cluster branch is time-consuming.
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: Intuitively, the convolution operation performed on feature maps also acts like an implicit clustering method that aggregates surrounding features. Is it necessary to make such a big effort to introduce a new cluster branch? If you want to alleviate the phenomenon that a stack of convolution layers weakens the capability of representing an object by its center point and reduces structure information, how about using deformable convolution?
Other question, please refer the weakness part above and give your demonstration.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 3 good
Contribution: 2 fair
Limitations: None.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the thoughtful review. We respond to your comments below.
> *The demonstration of the class-aware BEV diffusion is not clear. How to leverage the classification results to control the expansion magnitude? Figure 3 is also confusing, how to get Class-aware Vote BEV?*
Thanks for your valuable comments. The expansion magnitude is adapted according to the predictions by setting different max-pooling kernel sizes. Specifically, on the Waymo Open Dataset, we set the kernel size to 1, 3, and 5 if a cluster is most likely a pedestrian, cyclist, or vehicle, respectively. We adopt different expansion magnitudes because these classes differ in scale. These details are included in Section 4.1 of the manuscript; we will further emphasize them.
Besides, we have revised Figure 3 (shown in **Figure 1 of the PDF**) to make it clearer, and elaborate the process of getting Class-aware Vote BEV as follows:
1. We convert the cluster features to the BEV space and generate vanilla vote BEV features F.
2. Since the feature maps are class-agnostic, we merge the classification results (assuming c categories) from the cluster branch into masks { $W_1,W_2, …,W_{c}$ }. The vote BEV feature $F_i$ for each class is then computed using the following equation:
$F_i=W_i \cdot F$.
3. As objects in different classes vary in size, we choose **different kernel sizes according to the classification results of each cluster** (i.e., 1 for pedestrian, 3 for cyclist and 5 for vehicle).
4. Based on the defined kernel sizes, we perform max pooling on each per-class vote BEV feature simultaneously, which generates the diffused vote BEV feature of each class $D_i$.
$D_i=\text{maxpooling}(F_i,\text{kernel size})$
5. Finally, **the generated class-aware vote BEV** { $D_1,D_2, …,D_c$ } are together fed into a fully connected layer, which generates the required class-aware vote BEV.
In the final version, we will accordingly revise Section 3.3 to provide a more detailed and intuitive explanation.
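The five steps above might look roughly like this in code. This is a simplified numpy sketch with single-channel BEV maps; the actual implementation operates on multi-channel features and ends with the fully connected fusion layer (step 5), which we omit here:

```python
import numpy as np

def maxpool_same(x, k):
    """Max-pool a 2D map with an odd kernel size k and 'same' padding."""
    pad = k // 2
    xp = np.pad(x, pad, mode="constant", constant_values=-np.inf)
    out = np.empty_like(x)
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            out[i, j] = xp[i:i + k, j:j + k].max()
    return out

def class_aware_diffusion(vote_bev, class_masks, kernel_sizes):
    """Steps 2-4 above: per-class masking, then class-specific max pooling.

    vote_bev:     (H, W) vanilla vote BEV feature F
    class_masks:  one (H, W) binary mask W_i per class
    kernel_sizes: per-class pooling kernel (e.g. 1/3/5 for ped/cyc/veh)
    Returns the diffused per-class vote BEV maps { D_i }.
    """
    diffused = []
    for w_i, k in zip(class_masks, kernel_sizes):
        f_i = w_i * vote_bev                   # F_i = W_i . F   (step 2)
        diffused.append(maxpool_same(f_i, k))  # D_i             (steps 3-4)
    return diffused
```

A single vote pixel thus diffuses into a 1x1, 3x3, or 5x5 neighborhood depending on the predicted class of its cluster.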
> *lack of discussion of Model Efficiency.*
Thank you for the suggestion. We present the comparison of model efficiency between our CluB and the BEV-only baseline in terms of latency and FLOPs accordingly. The results are obtained on Waymo Open Dataset with one NVIDIA 3090 GPU.
The computational overhead of CluB is 1.2 / 1.3 times that of the BEV-only baseline in terms of latency / FLOPs. The cost is affordable since the auxiliary cluster branch is built with fully sparse operations [1]. A detailed comparison is shown in the following table. We will add the experimental results and analysis in the future revision. Thanks again for your advice.
| | Baseline | CluB |
|--------------|-------------------|--------|
| Latency (ms) | 144 | 167 |
| FLOPs (G) | 95.71 | 112.35 |
> *It seems that the convolution operation performed on feature maps is also like an implicit clustering method that aggregates the surrounding features. Is it necessary to make such a big effort to introduce a new cluster branch?*
We agree that the convolution operation can aggregate the surrounding features. However, the filters of convolution have fixed kernel sizes, which means the receptive field might not adapt optimally to objects with different scales and aspect ratios. By contrast, our cluster-based branch groups points (voxels) into clusters with a flexible receptive field. Besides, the center voting operation is leveraged in the cluster branch, which further benefits introducing the object-centric feature.
> *If you want to alleviate the phenomenon that a stack of convolution layers will weaken the capability of presenting an object with the center point and reduce structure information, how about using deformable convolution?*
Thanks for your valuable comments. We agree that it is feasible to exploit deformable convolution for extracting center features, but its performance in the point-cloud domain is unknown. To verify this, we conducted an additional experiment: we replaced all the 2D convolution layers in the BEV backbone with deformable convolution layers. Compared with the baseline (60.5%), this approach leads to inferior detection performance (20.6%). Accordingly, although deformable convolution has been proven effective for learning deformable shapes and scales, it barely works when the object center is empty. We will add the experiment and analysis in the revision.
[1] Lue Fan et al., Fully Sparse 3D Object Detection, NeurIPS 2022. | null | null | null | null | null | null |
Gradient Flossing: Improving Gradient Descent through Dynamic Control of Jacobians | Accept (poster) | Summary: The authors propose a novel approach called "gradient flossing" to tackle the instability of gradients in the training of recurrent neural networks (RNNs). The process of pushing the Lyapunov exponents toward zero is referred to as "flossing" the gradients. This stabilizes the gradients and improves network training.
Strengths: + The authors show improvements in several toy sequence learning tasks: Delayed Copy Task, Temporal XOR Task, etc.
+ Idea of using Lyapunov exponents towards zero to mitigate vanishing and exploding gradients is well justified and works as expected on toy tasks.
Weaknesses: The lack of any experiments on non-simulated data is problematic, and the explanation given by the authors is not convincing. The paper's authors explicitly state that they deliberately did not use sequential MNIST or similar non-toy tasks commonly used to probe exploding/vanishing gradients.
> We deliberately do not use sequential MNIST or similar toy tasks commonly used to probe exploding/vanishing gradients, because we want a task where the structure of long-range dependencies in the data is transparent and can be varied ad libitum
While toy tasks are more transparent, real-world data is important to understand the proposed methods' potential failure modes and expected characteristics in *uncontrolled* settings. Avoiding running experiments on the basis that the results are more difficult to interpret than simulated data is not an acceptable excuse. Transparency in the work itself is hindered when experiments are consciously avoided.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Can the authors comment that such a technique of pushing the Lyapunov exponents to zero could limit the capacity to learn certain tasks? Maybe enabling the network to amplify or forget things is important for some task contexts, which has been a motivation for specialized architectures like the LSTM.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: Experimental limitation mentioned above.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal:
Thank you for your thoughtful review and feedback on our manuscript. We appreciate the acknowledgment and recognition of our novel approach involving the use of Lyapunov exponents towards zero to address the challenges of vanishing and exploding gradients in RNNs.
> While toy tasks are more transparent, real-world data is important to understand the proposed methods' potential failure modes and expected characteristics in uncontrolled settings. Avoiding running experiments on the basis that the results are more difficult to interpret than simulated data is not an acceptable excuse. Transparency in the work itself is hindered when experiments are consciously avoided.
We recognize the importance of assessing our method on real-world datasets and thank the reviewer for emphasizing this aspect. Our rationale behind beginning with synthetic toy tasks was to "open the black box" and comprehensively understand the tangent space structure of gradient descent from a dynamical systems perspective and how it is improved by gradient flossing. This rigorous examination has allowed us to delve into not just vanishing and exploding gradients, but also:
- Observe how the complexity of tasks impacts the number of required Lyapunov exponents to be flossed, as showcased in Supplementary figure 2.
- Study how gradient flossing enhances the norm of the long-term Jacobian (Supplementary figure 7).
- Establish a relationship between the training epoch, where accuracy exceeds the chance level, and the increase in the gradient norm (Supplementary figure 8).
- Analyze the positive effects of gradient flossing on the condition number of the recurrent weight gradient (Figure 2 and supplementary figure 8).
- Formulate a mathematical correlation between the post-gradient-flossed Lyapunov exponents and task complexity, quantified by the delay $d$ (Supplementary figure 9).
To add, our main contribution hinges on the connection between the condition number of the long-term Jacobian and the Lyapunov spectrum. Our work, especially in the section 5 on "the condition number of the Long-Term Jacobian" (Figure 2), provides novel insights into the dimensionality of the gradient update, offering an analytical perspective on the nature of signal propagation backward in time during RNN training.
> Can the authors comment that such a technique of pushing the Lyapunov exponents to zero could limit the capacity to learn certain tasks? Maybe enabling the network to amplify or forget things is important for some task contexts, which has been a motivation for specialized architectures like the LSTM.
You've raised an excellent point. Our analysis in Supplementary figure 11 demonstrates potential conflicts of gradient flossing with certain task goals, where persistent gradient flossing over multiple Lyapunov exponents may inhibit the RNN's nonlinearity. We've addressed this challenge by limiting gradient flossing steps and subsequently focusing on task loss minimization without gradient flossing. For architectures like LSTM, this scenario could differ, as their latent variables can effectively carry gradient information over extended time frames, even amidst active nonlinearities on $h$.
In conclusion, we deeply value the feedback and are keen to explore the applicability and nuances of our gradient flossing technique on real-world datasets beyond toy tasks, to make our contribution more comprehensive and robust in the field of RNN training.
---
Rebuttal Comment 1.1:
Comment: Having reviewed the authors' rebuttals to my questions and weaknesses, as well as their rebuttals to other reviewers, I will keep my score. It remains important to demonstrate this method on data that the overall research community is familiar with, and the authors do not seem willing to do so while still claiming to address the well-known vanishing/exploding-gradient problem. I still consider this a severe weakness of this work.
The request is not to see this method as performant or superior to other work. It is to understand the proposed methods' potential failure modes and expected characteristics in uncontrolled settings.
---
Reply to Comment 1.1.1:
Title: Addressing Reviewer Concerns: Failure Modes, Optimized GradientFlossing Methodology, and Sequential MNIST
Comment: We thank the reviewer for their feedback and the emphasis on understanding the potential failure modes and expected characteristics of GradientFlossing in uncontrolled settings.
1. **Addressing Real-World Data Concerns**: We acknowledge the importance of evaluating GradientFlossing on datasets familiar to the research community. We are currently conducting numerical evaluations on Sequential MNIST, with promising preliminary results. These findings will be included in the camera-ready version of the manuscript.
2. **Mitigating Potential Failure Modes**: The potential conflict of persistent gradient flossing with certain task goals is a concern we have also observed (as depicted in Supplementary Figure 11). However, we have now devised an automatic solution to this challenge. By simultaneously conducting gradient flossing and training, we employ a combined loss function:
$L_{total} = L_{task} + \alpha L_{gradientflossing}$
Crucially, the strength of gradient flossing, represented by $\alpha$, is trained alongside the network, dynamically adjusting its value throughout the learning process. This approach ensures that gradient flossing is optimized in relation to the task goal, effectively circumventing the described failure mode. We are also exploring a similar automated strategy to determine the optimal number of flossed Lyapunov exponents.
3. **Further Investigations and Future Work**: Exploding and vanishing gradients remain a prevalent issue in RNNs, and our GradientFlossing technique is also theoretically applicable to feedforward networks, offering a promising avenue for future research. While there exist other methods and architectures addressing these challenges, we believe GradientFlossing presents a complementary approach. Preliminary experiments also indicate its potential in further enhancing LSTM performance on tasks with long time-horizons to bridge.
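As a rough sketch of the trainable flossing strength from point 2: one standard way to learn a loss weight without it collapsing to zero is a Kendall-style uncertainty parametrization, $\alpha = e^{-s}$ with a $+s$ penalty. This is our own illustration of the idea, not the authors' exact scheme; the function names and the fixed toy losses are assumptions:

```python
import math

def total_loss(l_task, l_floss, s):
    # Kendall-style weighting: alpha = exp(-s), plus a +s penalty so that
    # simply shrinking alpha to zero does not minimize the loss for free.
    return l_task + math.exp(-s) * l_floss + s

def train_weight(l_floss, s=0.0, lr=0.1, steps=200):
    # Gradient descent on s alone (losses held fixed for illustration):
    # dL/ds = -exp(-s) * l_floss + 1, with fixed point alpha = 1 / l_floss.
    for _ in range(steps):
        grad = -math.exp(-s) * l_floss + 1.0
        s -= lr * grad
    return math.exp(-s)  # the learned weight alpha
```

With a fixed flossing loss of 4.0, `train_weight(4.0)` converges to alpha ≈ 0.25, i.e., the learned weight adapts inversely to the magnitude of the flossing loss.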
In conclusion, we are dedicated to ensuring our contribution is well-understood, robust, and applicable to real-world scenarios. We appreciate the reviewer's insights, which drive our ongoing efforts to enhance and clarify the impact and scope of GradientFlossing. | Summary: The authors propose a new technique for handling gradient instability (vanishing/exploding) in neural network models, especially sequence models such as RNNs. Specifically, by leveraging results from recent works that establish a connection between Lyapunov growth exponents (from dynamical systems) and the singular values of the model-Jacobian itself, the authors propose a (regularization) approach that they coin ‘gradient flossing’ to control the evolution of these singular values desirably. They demonstrate that, as a result, the gradient signal remains stable, thereby allowing for successful training of the model itself.
Additionally, the authors provide insights on the steps (schedule, pre/during training) to perform such a regularization leading to optimal gradient stabilization & computational benefits.
Strengths: The authors visit a longstanding problem in training NN models, esp. sequence models with long temporal dependencies. By leveraging recently established connections between Lyapunov exponents and the singular values of the (long-term) Jacobian, the authors propose a new technique in the form of:
- Defining a loss function out of these Lyapunov exponents
- Determine a clever strategy to evaluate this loss function through a QR decomposition
to constrain these exponents directly to stable target values. This, in turn, allows control over the evolution of the Jacobian spectra itself.
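The QR-based evaluation the reviewer refers to follows the standard Benettin-style reorthonormalization scheme, which can be sketched as follows (a generic illustration of the algorithm, not the authors' code; `k` is the number of leading exponents to estimate):

```python
import numpy as np

def lyapunov_spectrum(jacobians, k):
    """Estimate the k largest Lyapunov exponents from a sequence of
    single-step Jacobians D_s via Benettin-style QR reorthonormalization.
    The exponents are the time-averaged logs of the diagonal of R."""
    n = jacobians[0].shape[0]
    rng = np.random.default_rng(0)
    q, _ = np.linalg.qr(rng.standard_normal((n, k)))  # random orthonormal basis
    log_r = np.zeros(k)
    for d in jacobians:
        q, r = np.linalg.qr(d @ q)          # propagate, then re-orthonormalize
        log_r += np.log(np.abs(np.diag(r)))
    return log_r / len(jacobians)
```

Because the exponents are smooth functions of the accumulated `log|diag(R)|` terms, a regularization loss pushing them toward target values can be differentiated through this recursion.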
The authors provide experimental evidence supporting this technique on two specific tasks that, by design, require long-time gradient signals to propagate efficiently.
Furthermore, the authors provide practical insights into the (expensive) computational needs of an exact implementation of such a scheme and into when (before or during training) to apply it. They then propose a modified, intermittent regularization routine that preserves the benefits at a lower computational cost.
The authors do an overall good job of satisfactorily articulating the problem statement, the proposed solution, observed benefits, and implementation limitations.
Weaknesses: While I believe this is a good contribution, I feel the work can undergo a fair amount of extra polishing in terms of clarity. Please find below (Questions) my comments/questions on areas where this work can be further improved.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: - Eq. (2) does not necessarily need to refer to the ‘long-term Jacobian’ unless $\Delta{t} \equiv t - \tau \to \infty$. That should be clarified.
- Is the loss function from Eq. (4) used as an additional or the only loss to train?
- Eq. (4) does not seem to contain a reference to the target $\lambda$ values that the authors eventually aim to constrain them to
- While the analytical $t\to\infty$ limit is understandable, can you provide an estimate of a reasonable sequence length for *practical data* that can serve as a finite size equivalent of this limit?
- Is Eq. (5) subject to the assumption that parameter updates are gradient-based?
- I believe the ’s’-index in Eq. (6) is supposed to run from $\tau$ to be exact (?)
- Clearly, in arriving at Eq. (6), the $Q$’s from *subsequent* time-steps $s$ and $s+1$ act on each other to reduce to unity ($Q_{s+1}^T Q_{s}$), leaving with the $R$’s only. Can the authors explain why that might be the case in general? Is there an assumption about a small learning rate here?
- Since the authors claim the QR decomposition need not be applied at every time step, will the equivalent of Eq. (6) thus arrived at still be correct? The expression for $T_t$ contains an iterative product of $D_s$; if one chooses to QR decompose only a handful of them, that does not seem to yield $\prod_s R^s$ for some period $t_{ons}$.
- In Lines 140-141, you refer to using vanilla gradient descent for the experiments of Fig. 1 when you specify in Lines 132-133 that experiments were performed using ADAM.
- Isn’t a target $\lambda$ value of -1 indicative of vanishing gradients as per Eq. (3)? How is the model still reliably attaining that value regardless, then?
- In Fig. 2 (A), the ‘theory’ and ‘simulation’ legends seem to be interchanged.
- In Fig. 2 (A) & (C), the numerical and theoretical results should overlap in the long time-step regime, yet as time increases, the two appear to diverge in the former and maintain a uniform scaling (~10^2) in the latter. Can you explain why?
- Can you explain what you mean by ‘flossing before training’? That does not seem to be well clarified in the text.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: To the best of my knowledge, the authors do a commendable job of acknowledging the various limitations of their approach.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely thank the reviewer for their constructive feedback which has greatly contributed to improving the quality of our manuscript.
> Eq. (2) does not necessarily need to refer to the ‘long-term Jacobian’ unless $\Delta t \equiv t-\tau\rightarrow \infty$. That should be clarified.
We have updated the manuscript to only refer to $T$ as the long-term Jacobian when $t-\tau$ is long, thereby clarifying this point.
> Is the loss function from Eq. (4) used as an additional or the only loss to train?
We've clarified that while our current implementation utilizes the gradient flossing loss from Eq. (4) as the sole loss during its epochs, it can also be combined with a task-specific loss for computational efficiency.
> Eq. (4) does not seem to contain a reference to the target $\lambda$ values that the authors eventually aim to constrain them to.
Correct, we here set the target Lyapunov exponents to 0 and thus directly minimize the square of the Lyapunov exponents. It would be an exciting future direction to explore if other target values for the Lyapunov exponents can further improve trainability. For instance, in some tasks, one might want to have a certain number of contracting or growing directions.
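To make the shape of such a loss concrete, here is a minimal NumPy sketch (illustrative only, not the paper's implementation; the function name `flossing_loss` and the `targets` argument are our own): the Lyapunov exponents are estimated from the per-step Jacobians via iterated QR, and their squared deviation from the target values is returned.

```python
import numpy as np

def flossing_loss(jacobians, targets=0.0):
    """Squared deviation of the estimated Lyapunov exponents from `targets`.

    With targets=0.0 this is exactly 'minimize the square of the
    Lyapunov exponents', as in the setting described above.
    """
    N = jacobians[0].shape[0]
    Q, log_R = np.eye(N), np.zeros(N)
    for D in jacobians:
        Q, R = np.linalg.qr(D @ Q)           # re-orthonormalize each step
        log_R += np.log(np.abs(np.diag(R)))  # accumulate log |R_ii|
    lyap = log_R / len(jacobians)            # Lyapunov exponent estimates
    return np.sum((lyap - targets) ** 2)
```

In a real implementation this forward pass would be differentiated (e.g. with automatic differentiation) to update the network parameters.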
> While the analytical limit $t\to\infty$ is understandable, can you provide an estimate of a reasonable sequence length for practical data that can serve as a finite size equivalent of this limit?
Thank you for pointing this out. We added an extra figure to demonstrate how quickly the Lyapunov spectrum converges with the number of time steps. We found that a relative error of 1% is already achieved after fewer than 100 steps. We note that even shorter sequence lengths improve training. We suspect that the sequence length should be on the order of the longest time horizon of the task that has to be bridged by backpropagation through time.
> Is Eq. (5) subject to the assumption that parameter updates are gradient-based?
Eq. 5 is a general expression, and there's no assumption regarding the parameter update mechanism therein.
> I believe the ’s’-index in Eq. (6) is supposed to run from $\tau$ to be exact (?)
We are not exactly sure what this comment refers to. In the expression for the Lyapunov exponent,
$\lambda_{i}=\lim_{t\to\infty}\frac{1}{t}\log\prod_{s=1}^{t}R_{ii}^{s}=\lim_{t\to\infty}\frac{1}{t}\sum_{s=1}^{t}\log R_{ii}^{s}.$
the index $s$ correctly runs from s=1 to t, but one could equivalently write
$\lambda_{i}=\lim_{(t-\tau)\to\infty}\frac{1}{t-\tau}\log\prod_{s=\tau}^{t}R_{ii}^{s}=\lim_{(t-\tau)\to\infty}\frac{1}{t-\tau}\sum_{s=\tau}^{t}\log R_{ii}^{s}.$
> Clearly, in arriving at Eq. (6), the $Q$’s from subsequent time-steps $s$ and $s+1$ act on each other to reduce to unity ($Q_{s+1}^T Q_{s}$), leaving only the $R$’s. Can the authors explain why that might be the case in general? Is there an assumption about a small learning rate here?
No, there is no assumption of small learning rates here. We apologize in case this was confusing, but $s$ is the time index, and learning happens across epochs.
> Since the authors claim the QR decomposition need not be applied at every time step, will the equivalent of Eq. (6) thus arrived at still be correct? The expression for $T_t$ contains an iterative product of $D_s$; if one chooses to QR decompose only a handful of them, that does not seem to yield $\prod_s R^s$ for some period $t_{ons}$.
Yes, Eq. (6) remains mathematically correct even for infrequent QR decomposition. If one chooses to QR decompose less often, the resulting (fewer) $R$ matrices will be bigger:
More explicitly, the result of
$D_2 D_1 = D_2 Q_1 R_1 = Q_2 R_2 R_1$
and
$D_2 D_1 = \tilde Q_2 \tilde R_2$ is identical (as QR-decomposition is unique when constraining the signs of the diagonal of $R$ to be positive), thus $\tilde R_2 = R_2 R_1$.
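This identity is easy to check numerically. Below is a small NumPy sketch (illustrative; random matrices stand in for the Jacobians, and since NumPy does not fix the sign convention of the QR factors, the comparison is made up to signs on the diagonal):

```python
import numpy as np

rng = np.random.default_rng(1)
D1 = rng.standard_normal((4, 4))
D2 = rng.standard_normal((4, 4))

# QR at every step: D2 D1 = Q2 (R2 R1)
Q1, R1 = np.linalg.qr(D1)
Q2, R2 = np.linalg.qr(D2 @ Q1)

# One-shot QR of the full product: D2 D1 = Q2_tilde R2_tilde
Q2_tilde, R2_tilde = np.linalg.qr(D2 @ D1)

# The diagonal of a product of upper-triangular matrices is the product
# of the diagonals, so |diag(R2_tilde)| should equal |diag(R2) * diag(R1)|.
stepwise = np.abs(np.diag(R2) * np.diag(R1))
oneshot = np.abs(np.diag(R2_tilde))
```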
> In Lines 140-141, you refer to using vanilla gradient descent for the experiments of Fig. 1 when you specify in Lines 132-133 that experiments were performed using ADAM.
Our apologies for the inconsistency. We've revised the manuscript to clarify that ADAM was the optimizer used in all our experiments.
> Isn’t a target $\lambda$ value of -1 indicative of vanishing gradients as per Eq. (3)? How is the model still reliably attaining that value regardless, then?
You're right. A Lyapunov exponent of -1 does suggest vanishing gradients. However, we utilized the QR-algorithm to achieve this target, which inherently avoids the issue of vanishing gradients.
> In Fig. 2 (A), the ‘theory’ and ‘simulation’ legends seem to be interchanged.
This oversight has been addressed in the revised manuscript.
> In Fig. 2 (A) & (C), the numerical and theoretical results should overlap in the long time-step regime, yet as time increases, the two appear to diverge in the former and maintain a uniform scaling (~10^2) in the latter. Can you explain why?
Thanks for pointing this out. We've thoroughly investigated the differences and found the cause for the mismatches. For Figure 2A, discarding initial transients and increasing numerical precision resolves the discrepancy. In Figure 2C, there's no actual mismatch; the x-ticks and y-ticks were differently set, and we have updated the figure to rectify this.
> Can you explain what you mean by ‘flossing before training’? That does not seem to be well clarified in the text.
We meant that our approach first involves several gradient flossing steps, after which the main task training begins without additional gradient flossing. This point has been elaborated upon for clarity in the revised manuscript.
Thank you once again for your valuable feedback. We believe these revisions have significantly strengthened our submission.
---
Rebuttal Comment 1.1:
Title: Reviewer questions addressed?
Comment: As the discussion period draws to a close, we'd like to inquire if our responses have adequately addressed the reviewer's concerns.
For clarity, here's a summary of our main updates:
1. Elucidated when $T$ is considered the long-term Jacobian.
2. Defined the application of the gradient flossing loss from Eq. (4).
3. Noted that the target Lyapunov exponents in Eq. (4) are set to $0$.
4. Showcased the Lyapunov spectrum's convergence over time steps.
5. Explained the context and assumptions of Eqs. (5) and (6).
6. Delved into the nuances of infrequent QR decomposition.
7. Rectified inconsistencies in optimization methods and figure descriptions.
8. Addressed discrepancies in Figure 2.
9. Detailed the 'flossing before training' approach.
We trust that our clarifications meet the expectations set by the review. If there are any outstanding concerns, we're more than willing to provide further clarity. Otherwise, we extend our gratitude to the reviewer for their invaluable feedback, which has undeniably enhanced the quality of our manuscript. | Summary: Authors propose gradient flossing for RNNs, which adds an additional regularization term that keeps the sum of Lyapunov exponents close to zero. This essentially encourages the singular values of the long-term Jacobian to be close to 1, hence addressing the vanishing/exploding gradient problem in RNNs. The authors point out that minimizing this regularization term again has the same gradient instability problem, and they purpose to apply QR decomposition once in a while to avoid the ill-conditioned long-term Jacobian pass. Finally, the authors show gradient flossing achieves faster convergence and a higher success rate on simple long-term tasks up to 500 steps.
Strengths: * The idea of regularizing the Lyapunov exponent and using QR decomposition is novel and interesting.
* The problem is well-motivated.
Weaknesses: * Empirical evaluations are toy. The paper did not include results on standard long-time-horizon benchmarks like Long Range Arena [1].
* The paper could have studied gradient flossing with architectures that are more suited for long horizon tasks such as state space models [2] and LRUs [3], instead of Vanilla RNNs and LSTMs.
* Authors should compare to more recent methods as baselines such as HiPPO Matrices[2] and initialization methods used in LRU [3]. Those methods tend to be more competitive than a simple orthogonal initialization.
* Clarity of the paper can be improved. For example, $\tau$ in equation 1 is never defined, and equation 4 could be better explained. In addition, I did not quite understand the technical parts of the paper explaining how equation 4 is calculated in practice (see questions below).
[1] Tay, Yi, et al. "Long range arena: A benchmark for efficient transformers." arXiv preprint arXiv:2011.04006 (2020).
[2] Gu, Albert, Karan Goel, and Christopher Ré. "Efficiently modelling long sequences with structured state spaces." arXiv preprint arXiv:2111.00396 (2021).
[3] Orvieto, Antonio, et al. "Resurrecting recurrent neural networks for long sequences." arXiv preprint arXiv:2303.06349 (2023).
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors:
* I’m still unclear how QR decomposition is able to avoid the long-term Jacobian being ill-conditioned.
* Why can we not use QR decomposition directly on the original RNN objective in equation 1 to avoid the long-term gradient instability problem?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 3 good
Limitations: See above.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We deeply appreciate reviewer ZKKk for their thorough and insightful review of our manuscript. Your constructive feedback is invaluable, and we are committed to addressing each concern raised to enhance the quality and clarity of our work.
> I’m still unclear how QR decomposition is able to avoid the long-term Jacobian being ill-conditioned.
To provide a more intuitive explanation before diving into technicalities: Imagine the Jacobians as a series of transformations of small perturbations that can stretch or compress the space along the trajectory of the RNN. Over time, when many such transformations are multiplied, they can cause vectors to become too aligned, leading to numerical instability. The QR decomposition offers a mathematically exact way to correct for this by ensuring that the space remains 'evenly spread' after each transformation.
Now the technicalities: The core idea is that each individual Jacobian $D_s$ is not ill-conditioned, but their product $T_t(h_\tau)$ is. Using QR decomposition, one can iteratively decompose the product of Jacobians sufficiently often such that the product is not ill-conditioned [1,2].
To illustrate this geometrically, consider the evolution of initially random vectors $v^i$ and $v^j$ under the influence of the product of Jacobians $T_t(h_\tau)$. As $t-\tau$ is increased, the angle between $T_t(h_\tau) v^i$ and $T_t(h_\tau) v^j$ becomes too small for numerical computation. This problem can be circumvented through the iterative application of QR decomposition, which repositions the vectors into orthogonal subspaces, thereby maintaining numerical stability [3].
To further illustrate this point dynamically for the forward pass of the calculation of Lyapunov exponents: Almost all perturbations of the initial recurrent network state $h$ will align over time with the fastest diverging (or slowest converging) tangent space direction, which is the first covariant Lyapunov vector [2]. The QR decomposition of a volume element in the tangent space will express the volume in terms of an orthonormal matrix $Q$ and an upper triangular matrix $R$, where the $Q$ matrix will converge to a unique basis when constraining the diagonal elements of $R$ to be positive [1,4].
In summary, QR decomposition helps by iteratively repositioning vectors into orthogonal subspaces, thereby ensuring that numerical calculations remain accurate. The mechanics of this process align with the behavior of the Lyapunov exponents, as detailed in our earlier explanation.
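The numerical contrast can be demonstrated in a few lines of NumPy (an illustrative sketch, not the authors' code; random Gaussian matrices stand in for the per-step Jacobians): the naive product of Jacobians becomes extremely ill-conditioned, whereas the iterated QR keeps an orthonormal basis at all times, and the sum of the resulting Lyapunov-exponent estimates still matches the exact value obtained from the determinants.

```python
import numpy as np

rng = np.random.default_rng(2)
N, T = 6, 60
Ds = rng.standard_normal((T, N, N))  # stand-ins for the per-step Jacobians D_s

# Naive: multiply all Jacobians directly -- the product is ill-conditioned.
P = np.eye(N)
for D in Ds:
    P = D @ P
naive_cond = np.linalg.cond(P)  # astronomically large

# Iterated QR: re-orthonormalize after every step; only log |R_ii| is kept.
Q, log_R = np.eye(N), np.zeros(N)
for D in Ds:
    Q, R = np.linalg.qr(D @ Q)
    log_R += np.log(np.abs(np.diag(R)))
lyap = log_R / T

# Exact consistency check: sum_i lambda_i = (1/T) sum_s log|det D_s|,
# since log|det P| equals the sum of the accumulated log |R_ii|.
log_det_rate = sum(np.log(np.abs(np.linalg.det(D))) for D in Ds) / T
```

Here `Q` stays orthonormal throughout, while `naive_cond` reflects how degenerate the directly multiplied product of Jacobians has become.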
> Why can we not use QR decomposition directly on the original RNN objective in equation 1 to avoid the long-term gradient instability problem?
Great suggestion! In principle, one could implement the backward pass of backpropagation through time by hand and iteratively apply QR decomposition to the long-term Jacobians, directly regularizing the diagonal of the $R$ matrix. However, for gradient flossing, we chose the much simpler solution of calculating the logarithm of the singular values of the long-term Jacobian in the forward pass in a numerically robust way and regularizing them using automatic differentiation. This has the advantage that automatic differentiation takes care of the backward pass, which makes gradient flossing easy to use, as only the forward pass has to be implemented. We will mention the referee's elegant (and possibly more efficient) idea in the outlook of the manuscript.
We are genuinely grateful for the feedback, and we believe that addressing these points will significantly elevate the quality and impact of our manuscript. We hope that our revisions will meet your expectations and further advance the field's understanding of gradient flossing and its applications in RNNs. We look forward to answering follow-up questions and further comments.
[1] K. Geist, U. Parlitz, and W. Lauterborn, Comparison of Different Methods for Computing Lyapunov Exponents, Prog. Theor. Phys. 83, 875 (1990).
[2] A. Pikovsky and A. Politi, Lyapunov Exponents: A Tool to Explore Complex Dynamics (Cambridge University Press, Cambridge, 2016).
[3] G. Benettin, L. Galgani, A. Giorgilli, and J.-M. Strelcyn, Lyapunov Characteristic Exponents for Smooth Dynamical Systems and for Hamiltonian Systems - A Method for Computing All of Them. I - Theory. II - Numerical Application, Meccanica 15, 9 (1980).
[4] S. V. Ershov and A. B. Potapov, On the Concept of Stationary Lyapunov Basis, Physica D: Nonlinear Phenomena 118, 167 (1998).
---
Rebuttal Comment 1.1:
Title: Thank you authors for the rebuttal
Comment: Thank you, authors, for the detailed response, as well as the more intuitive explanations for my clarification questions. I really appreciate it.
I am still inclined to retain my rating because the authors did not touch upon the weaknesses I raised in the original review. In addition to agreeing with Reviewer 5Z2M's point that the method should be evaluated on standard benchmarks to investigate its failure modes, it should also be evaluated against more competitive baselines than simple vanilla RNNs/LSTMs (as suggested in the weaknesses section of the review).
---
Reply to Comment 1.1.1:
Title: Addressing weaknesses
Comment: We thank the reviewer for their feedback, both the constructive critique and the recognition that "The idea of regularizing the Lyapunov exponent and using QR decomposition is novel and interesting" and that "The problem is well-motivated."
We apologize for not thoroughly addressing the weaknesses in our initial rebuttal.
* **Standard Benchmarks**: Evaluations on SequentialMNIST are underway; results will be in the final manuscript.
* **Competitive Baselines**: We're extending evaluations to advanced architectures and recent methods such as HiPPO Matrices and LRU initialization methods.
* **Clarity**: We will clarify definitions and explanations in the manuscript, especially for equations 1 and 4.
* **Mitigating Failure Modes**: Our manuscript has been updated to emphasize a combined loss function:
$L_{\text{total}} = L_{\text{task}} + \alpha L_{\text{gradientflossing}}$
Here, the strength of gradient flossing, represented by $\alpha$, is trained alongside the network with backpropagation, dynamically adjusting its value throughout the learning process. This approach ensures that gradient flossing is optimized in relation to the task goal, effectively circumventing the described failure mode. We are also exploring a similar automated strategy to determine the optimal number of flossed Lyapunov exponents.
* **References and Evaluations**: New references have been incorporated, and additional results will be shared on our anonymous GitHub repository.
We are committed to enhancing our work based on your valuable insights. | Summary: The presented paper proposes a new method for tackling numerical instabilities during training for recurrent neural networks.
The proposed method exploits a theoretical link between Lyapunov exponents and the singular values of the long-term Jacobian.
The set of experiments is well-chosen to showcase the extent to which gradient flossing helps stabilize training for tasks involving long-range dependencies.
There is an in-depth coverage of the practical aspects of the methods in the appendix, which is very useful for both machine learning practitioners and researchers.
By steering the Lyapunov exponents to 0, either before and/or during training, the authors are able to show consistent improvements on a variety of synthetic learning tasks involving long-range dependencies in temporal sequences.
Strengths: - There are experiments validating basic hypotheses, such as the fact that the proposed algorithm is able to control the Lyapunov exponents, and that estimating the Lyapunov exponents via the QR decomposition matches the theory.
- Practical aspects are covered, such as when to perform gradient flossing, how often it should be used during training, and how often the QR decomposition should be computed when unrolling the neural network dynamics.
Weaknesses: - There is no evaluation on complex real-world benchmarks to estimate the impact of the proposed contribution. However, the paper is already quite dense.
- Minor remark: I don't understand the "finite network size fluctuations of the Lyapunov exponents" in Figure 1.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: - (lines 197-201) How does Fig. 2C show that the estimated condition number based on Lyapunov exponents can predict differences in condition number originating from finite network size $N$?
The authors refer to appendix F for this matter, but it doesn't seem related to this claim.
- Can the authors explain in more detail what the "finite network size fluctuations of the Lyapunov exponents" in Figure 1 are?
- Figure 11 in the appendix shows that too much gradient flossing during training can be detrimental, which means that the local minima of the gradient flossing procedure hardly coincide with those of the actual learning task. It is also said in the introduction that appropriate initialization schemes can ensure well-behaved gradients at initialization, a property that can be lost during training. I suspect that the gradient flossing procedure prevents the weights from falling into an "ill-conditioned" region. Thus, how often gradient flossing should be used should depend on, given a fixed learning rate, how many iterations are needed to go from a "well-behaved" region to an ill-conditioned one. Can the authors elaborate on this specific intuition?
I am asking this question because I suspect that you already found regimes where the regularization objective conflicts with the actual learning task.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 3 good
Limitations: As emphasized by the author, the Lyapunov exponent characterizes the singularity of the long-term Jacobian, and thus cannot characterize local-in-time behavior.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We deeply appreciate the insightful feedback provided by the reviewer. Your comments and concerns have not only broadened our perspective but also greatly aided in refining our manuscript to a higher standard. Specifically, we acknowledge the importance of evaluating gradient flossing on real-world complex benchmarks and are currently in the process of addressing this. Further, we have included additional details and clarifications based on your feedback, particularly concerning the "finite network size fluctuations of the Lyapunov exponents".
Here are our more detailed responses to the specific questions:
> (line 197 - 201) How does Fig 2.C shows that the estimated condition number based on Lyapunov exponents can predict differences in condition number originating from finite network size $N$? The authors refer to Appendix F for this matter, but it doesn't seem related to this claim.
Figure 2C shows that different network realizations of finite size (N=80) lead to different condition numbers $\kappa_2$ (for instance, different $\kappa_2$ among green dots). We note that these dots lie close to the diagonal, indicating that the differences in $\kappa_2$ coming from different network realizations are well-captured by the theoretical prediction on the y-axis.
> Can the author explain in more detail what are "the finite network size fluctuations of the Lyapunov exponents" in Figure 1 ?
Absolutely. For finite network size $N$, different realizations of the random recurrent weight matrix $\mathbf{W}$ will lead to slightly different Lyapunov exponents. Such differences are expected to vanish for large $N$. We added a new supplementary figure to demonstrate that. We note that parts of the variations among Lyapunov spectra might also come from the interplay of the different network realizations and the dynamics of gradient flossing. We added this note to the updated manuscript.
> Figure 11 in the appendix shows that too much gradient flossing during training can be detrimental, which means that the local minima of the gradient flossing procedure hardly coincides with the one corresponding to the actual learning task. It is also said in the introduction that appropriate initialization schemes can ensure a well-behaved gradient at initialization, a property that can be lost during training. I suspect that the gradient flossing procedure prevents the weights from falling into an "ill-conditioned" region. Thus, how often should gradient flossing be used should depend on, given a fixed learning rate, how many iterations are needed to go from a "well-behaved" region to an ill-condition one. Can the author elaborate on this specific intuition? I am asking this question because I suspect that you already found regimes where the regularization objective conflicts with the actual learning task.
A thoughtful observation! Continued gradient flossing can indeed interfere with the actual learning task, as is evident in Supplementary Figure 11. We agree that the optimal gradient flossing schedule should depend on how many iterations are necessary to go from an ill-conditioned to a well-behaved region in parameter space. After submission, we found that instead of using a fixed number of gradient flossing steps, "adaptive gradient flossing", which automatically stops once the Lyapunov exponents are close to zero, works even better. It is conceivable that refining this technique might depend on the specific task, and further improvements are being explored.
To conclude, we sincerely thank you for your in-depth review, and assure you of our commitment to enhancing the quality of our work based on the valuable feedback received. We believe that the integration of gradient flossing will mark a significant advancement in the domain, and are keen to contribute effectively to the community's understanding.
---
Rebuttal Comment 1.1:
Title: Satisfied by the answer
Comment: Dear authors,
I apologize for the late response, but I did read your answer.
I am satisfied with it.
Likewise, I will keep my score as is, and I encourage the authors to pursue their work with further validation on real-world scenarios.
I indeed think that understanding the possible mode of failure in an uncontrolled scenario would be very informative for practitioners, as pointed out by other reviewers.
I would however like to point out that I highly appreciated the motivation behind the use of synthetic tasks.
---
Reply to Comment 1.1.1:
Title: Further improvements to GradientFlossing
Comment: We thank the reviewer for their positive feedback and the emphasis on understanding the potential failure modes and expected characteristics of GradientFlossing in uncontrolled settings. We'd like to highlight key updates to our manuscript:
* **Mitigating Failure Modes**: We appreciate the reviewer's concerns about potential failure modes. Our recent advancements, as highlighted, involve a combined loss function:
$L_{total} = L_{task} + \alpha L_{gradientflossing}$
With the parameter $\alpha$ also trained using backpropagation, this approach aims to strike a balance, ensuring gradient flossing optimally complements the primary task goal.
* **Evaluations of GradientFlossing beyond Synthetic Tasks**: We acknowledge the significance of real-world scenario validation. As mentioned, evaluations using SequentialMNIST are in progress, preliminary results look promising, and the results will be integrated into the final manuscript.
* **Synthetic Tasks**: We are grateful for the reviewer's recognition of our motivation behind using synthetic tasks. They serve as controlled environments to rigorously test our method's fundamentals.
We thank the reviewer again for dedicating their time and effort to this submission. | Summary: The presented paper proposes a new method for tackling numerical instabilities during training for recurrent neural networks.
Rebuttal: We thank the reviewers for their time and valuable feedback on our manuscript, "Gradient Flossing: Improving Gradient Descent through Dynamic Control of Jacobians". We have carefully addressed each of the reviewers' comments in the subsequent sections. Here, we provide a concise summary to address the core critique raised by most reviewers.
We have taken note of the primary concerns raised, notably the application of our method to synthetic tasks instead of real-world tasks and the lack of comparison with other advanced architectures. We would like to highlight that the main objective of our manuscript was to bridge the understanding between the condition number of the long-term Jacobian and the Lyapunov spectrum, and "opening the black box" of gradient flossing in controlled synthetic tasks allowed us to rigorously investigate the tangent space structure. We utilized simple toy tasks because they allow a granular understanding by controlling every aspect of the problem, a strategy that has previously been employed effectively in foundational neural network research.
To address specific concerns raised and to provide a clearer picture of our results and methodology, we have included an additional page of figures:
* **Updated Figure 2** provides a clearer representation of how gradient flossing reduces the condition number of the long-term Jacobian and improves numerical accuracy. We have also corrected an error in the legend.
* A new plot showcases the **convergence of the Lyapunov exponents** over time steps. With 20 distinct network realizations, it is evident that convergence is reached swiftly, within 100 time steps.
* We demonstrate how gradient flossing enhances the **dimensionality of the error gradient** of the recurrent weights, as calculated based on the SVD of these error gradients.
That being said, we acknowledge the significance of applying our findings to real-world tasks. Building on the foundational understanding we've achieved through synthetic tasks, we are poised to extend gradient flossing to more realistic tasks and diverse architectures in our subsequent research.
Now, let us address the specific points raised by each reviewer, including additional figures that further support our methodology and findings.
Pdf: /pdf/5fbd49afc0d44296c85481727b4a99e60aac3269.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
The Surprising Effectiveness of Diffusion Models for Optical Flow and Monocular Depth Estimation | Accept (oral) | Summary: This paper approaches the task of predicting optical flow from a pair of images and depth from a single image. It proposes to do so using diffusion models, providing a training pipeline which includes contributions to deal with noisy training data. The proposed method is competitive with SOTA in depth prediction on NYU and KITTI, is similar or better to SOTA in optical flow prediction on Sintel and KITTI zero-shot, and is SOTA in optical flow after finetuning on KITTI. Additional experiments show the proposed pretraining pipeline meaningfully improves RAFT and the proposed model, that contributions dealing with noisy training data are helpful, and that the proposed method can predict multimodal outputs in cases of uncertainty.
Edit: Thanks to the authors for the rebuttal.
After reading the rebuttal and other reviews I will keep my rating at 7 - accept. I believe this paper should be accepted because it (1) introduces a simple but interesting method that performs SOTA on competitive benchmarks and (2) is very polished and experiments clearly defend all main contributions of the method.
Strengths: The paper provides a simple method that is similar-to or better than SOTA on competitive optical flow and depth tasks
- A new data pipeline is introduced which significantly boosts performance of prior SOTA e.g. RAFT, while improving further with the proposed method
- The method is built off a standard image-to-image diffusion method Palette, adds minimal detail needed to handle flow and depth data (e.g. L1 loss, infilling + unrolling, coarse-to-fine), and is competitive across a wide variety of experiments
- Real data is tricky to handle for diffusion models as it can contain incomplete flow and depth maps. The proposed method has an interesting idea of using its own predictions late in training. Combined with infilling, this is helpful.
The paper is very polished and experiments clearly defend all main contributions of the method.
- Writing is very clear, figures and tables are attractive and helpful
- Choice of training data is very important (Figure 3, Table 6, 7); it even improves RAFT meaningfully (Table 1)
- The method can also produce multimodal samples when faced with ambiguity, e.g. transparent/translucent/reflective surfaces (Figure 1, 4, 5)
- Figure 6 and Table 5 show the importance of coarse-to-fine refinement, which enables the method to produce better detail than RAFT (Figure 4, 5)
- Infilling + step-unrolling yields massive improvement on KITTI (Table 4)
- L1 loss (Supp Table 4), pretraining for depth (Supp Table 5, 6)
- Zero-shot depth completion is a cool application of the method
Weaknesses: Finetuning on real data yields weaker performance on Sintel vs. other methods (minor weakness, but addressing it could further strengthen the paper!)
- On Sintel (Table 2), most other methods use warm start, i.e. initializing flow prediction from previous frame. Is it possible to do this with diffusion, i.e. starting from a previous prediction, perhaps denoising fewer steps? If not, this is an important weakness of diffusion models in this setting
- Given that most other methods warm-start, it is harder to analyze the ability of the proposed method to finetune on real data. Perhaps it does not do as well relative to FlowFormer because (1) it does not handle real data as well as regression methods, even after contributions in this area, or (2) it does not overfit as well to specific datasets, given that others tend to have task-specific architectures. It would be very interesting to see this analyzed, either through a comparison to more methods without warm-start or by running the proposed method with warm-start.
The main novel contribution of this paper beyond data pipeline is step-unrolled denoising diffusion training, which yields modest improvement over infilling alone (minor weakness)
- The L1 loss is a design decision, and infilling holes using bilinear interpolation is not a substantial contribution in my eyes. So the remaining technical contribution is unrolling.
- I understand the contributions are deliberately simple, which is a positive given the performance gain. However, it is still important to analyze the reasons for the method’s success. In this case, the most novel component, “unroll”, improves NYU depth REL from 0.077 to 0.075, RMS from 0.338 to 0.324, KITTI REL from 0.057 to 0.056, RMS from 2.744 to 2.700, AEPE from 1.53 to 1.47, and F1-all from 5.24% to 4.74%. These contributions are nontrivial, but not substantial on their own.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: - L179 is a bit confusing and could be rewritten, e.g.: “Training high resolution diffusion models is often slow and memory intensive, but model performance has been shown to improve with resolution”
- Figure 4 (top) is dark and indistinct, making it hard to see. Are there failure cases for RAFT on more easily readable examples?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 3 good
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your review, and for your thoughtful comments and questions. They will certainly help improve the revised paper. Please see our response below.
> Finetuning on real data yields weaker performance on Sintel vs. other methods (minor weakness, but addressing it could further strengthen the paper!)
In Table 9 of the supplementary material, we show that we outperform FlowFormer on all Sintel test sequences but one, ambush_1. This sequence has significant ambiguity due to an all-white background, large camera and object motions, and the existence of numerous objects that occlude one another and move out of frame. This forces the model to depend heavily on inductive biases and learned motion priors, which could be challenging for our method. It is possible that FlowFormer’s inductive bias (with the cost volume etc.) makes it better suited to reason about this sequence’s large out-of-frame motions. In general, we agree with the reviewer that our model’s worse performance on Sintel compared to models such as FlowFormer, which we outperform by a large margin on KITTI, is surprising and deserves further exploration.
> Is it possible to do warm start with diffusion
There are a couple of ways of doing warm start with diffusion. One would be to use the warped flow as the initial estimate and perform partial denoising as the reviewer suggests. Another would be to guide the denoising process (similar to classifier guidance [19]) using a loss that ensures cross frame consistency. These are interesting directions, but we have not explored them in detail to date.
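For concreteness, the first option (partial denoising from a warped previous-frame estimate, in the spirit of SDEdit) could be sketched as follows. This is purely illustrative and not something we have implemented: `denoise_step` is a stand-in for one reverse-diffusion update with a trained denoiser, and the noise level `t_start` and step count are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def warm_start_sample(denoise_step, warped_prev_flow, t_start=0.3, n_steps=10):
    """Warm start via partial denoising: instead of sampling from pure noise
    at t = 1, noise the warped previous-frame flow to an intermediate time
    t_start and run only the remaining reverse-diffusion steps."""
    # forward-diffuse the warm-start estimate to noise level t_start
    y = np.sqrt(1.0 - t_start) * warped_prev_flow \
        + np.sqrt(t_start) * rng.standard_normal(warped_prev_flow.shape)
    # run only the tail of the reverse process, from t_start down to 0
    for t in np.linspace(t_start, 0.0, n_steps):
        y = denoise_step(y, t)  # one reverse update at noise level t
    return y
```

The appeal is that fewer denoising steps are needed when the warm-start estimate is already close to the answer; the open question is whether the model tolerates initialization off the learned marginal at `t_start`.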
> (1) it does not handle real data as well as regression methods, even after contributions in this area, or (2) it does not overfit as well to specific datasets given others tend to have task-specific architecture.
As the reviewer has noted, our model achieves SOTA performance on KITTI (a real dataset) by a large margin (Table 2). Relative to FlowFormer, our model’s weaker test performance on Sintel is somewhat surprising. This may be attributable to the differences in the data used for pre-training the FlowFormer model vs our diffusion model, or to the presence of a task-specific bias in the FlowFormer architecture. We agree with the reviewer that this is worthy of future exploration.
> Minor weaknesses: 1) L1 loss is a design decision, infilling holes using bilinear interpolation is not a substantial contribution in my eyes. So the remaining technical contribution is unrolling. 2) These contributions are nontrivial, but not substantial on their own.
We agree that L1 loss and bilinear interpolation are not technical novelties by themselves. The paper’s contribution, in this context, is a way to train diffusion models with noisy and incomplete data, where the train-test distribution shift is problematic. We found that a combination of L1, bilinear infilling and step unrolling are effective for diffusion model training with missing data. This enables one to train generic diffusion models that yield remarkably strong performance on both optical flow and monocular depth estimation without the specialized architectures and loss functions that have been common in SOTA models to date.
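For concreteness, a minimal numpy sketch of one such training step, combining infilling, step-unrolling, and the L1 loss. This is illustrative only: `denoiser` is a placeholder for the neural network, the noise schedule is simplified to a single scalar `t`, and `infill` uses a crude mean fill where the actual pipeline interpolates.

```python
import numpy as np

rng = np.random.default_rng(0)

def infill(y, mask):
    # crude stand-in for interpolation-based infilling:
    # replace missing entries (mask == 0) with the mean of the valid ones
    filled = y.copy()
    filled[mask == 0] = y[mask == 1].mean()
    return filled

def denoiser(y_t, t):
    # placeholder for the network's estimate of the clean signal
    return y_t * (1.0 - t)

def training_step(y, mask, t, unroll=True):
    y0 = infill(y, mask)
    eps = rng.standard_normal(y0.shape)
    y_t = np.sqrt(1.0 - t) * y0 + np.sqrt(t) * eps        # forward diffusion
    if unroll:
        # step-unrolling: re-noise the model's own estimate so the latent
        # y_t seen in training matches the one encountered at inference
        y0_hat = denoiser(y_t, t)
        y_t = np.sqrt(1.0 - t) * y0_hat \
            + np.sqrt(t) * rng.standard_normal(y0.shape)
    y0_pred = denoiser(y_t, t)
    # L1 loss restricted to pixels with valid ground truth
    return np.abs((y0_pred - y) * mask).sum() / mask.sum()
```

The key point of the sketch is the interplay of the three pieces: infilling keeps the forward diffusion well defined despite holes, unrolling closes the train/inference gap in the latent, and the masked L1 loss ensures only valid ground-truth pixels supervise the network.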
> Questions:
Q1: Thank you for the suggestion. We will write this more clearly when revising the paper.
Q2: We have included extra examples in Figure 1 in the attachment. We will include more examples in the final version of the paper.
---
Rebuttal Comment 1.1:
Title: Reviewer Response to Rebuttal
Comment: Thanks to the authors for the rebuttal.
After reading the rebuttal and other reviews I will keep my rating at 7 - accept. I believe this paper should be accepted because it (1) introduces a simple but interesting method that performs SOTA on competitive benchmarks and (2) is very polished and experiments clearly defend all main contributions of the method. | Summary: This paper proposes to use diffusion models to solve monocular depth and optical flow estimation tasks. Unlike previous task-specific models for depth and flow, this paper uses a generic diffusion model. This paper studies the effect of training data (synthetic and real) and processing of sparse depth and flow ground truth when training the diffusions models. Experiments are conducted for depth and flow tasks on representative benchmarks, the proposed method achieves state-of-the-art depth performance on NYU and state-of-the-art optical flow performance on KITTI.
Strengths: - **The idea of using diffusion models to solve depth and optical flow tasks is interesting.** Depth and optical flow are typically approached as regression tasks, so it's unclear how the popular diffusion models will perform on them. This paper explores this direction and shows some interesting results.
- **Several strategies are proposed to handle the issue of data for training diffusion models.** It's not straightforward to apply diffusion models to depth and flow tasks; the paper proposes data infilling, step-unrolling, and an L1 loss to tackle the challenges.
- **The experiments are extensive and informative.** Training data plays a significant role in training diffusion models, so this paper studies the effect of different training datasets for both depth and flow tasks. The performance on KITTI for the optical flow task is especially strong, outperforming previous 2-frame optical flow methods by a large margin.
- **A detailed discussion of limitations is presented in the supplementary material.** This paper gives a deep analysis of the limited performance on the Sintel test set, and the results indicate that a particular sequence in the test set severely affects the averaged performance, which might provide some hints for further improvement in the future.
Weaknesses: I didn't observe major weakness and would put some minor points to the Questions.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: - Due to the stochasticity of diffusion models, the same model might produce different results across runs. How large is the fluctuation, and how are the final quantitative results reported? Are the authors using some averaging?
- How sensitive is the proposed architecture to different image resolutions for both depth and flow tasks? For example, if the inference image resolution is different from training, will the model still perform reasonably?
- The optical flow results on KITTI are very strong, but the results on Sintel seem less robust (as also analysed in the supplementary material). What might cause the different behaviours on Sintel and KITTI? Could the authors further comment on this?
- I think one key message from this paper is that the experiments show the importance of training data. When comparing Table 1 and Table 6 for the results of RAFT and the proposed method, we can observe that RAFT outperforms diffusion models when only using AutoFlow for pre-training. However, diffusion models perform better when more datasets are added to the pre-training stage. I am wondering whether this indicates that diffusion models can benefit more from larger datasets than previous regression methods?
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 3 good
Contribution: 4 excellent
Limitations: Yes, the authors have carefully analyzed the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your review, and for your thoughtful comments and questions. They will certainly help improve the revised paper. Please see our response below.
> Due to the stochasticity of diffusion models, the same model might produce different results across runs. How large is the fluctuation, and how are the final quantitative results reported? Are the authors using some averaging?
Figures 1, 2, 3, and 4 in the supplementary material show multiple samples from the predictive posterior, along with variance heat maps. As visualized in the variance heat maps, the magnitude of local fluctuation depends on the nature of the multi-modality; variability is common in regions of ambiguity (e.g. transparent/reflective surfaces, object boundaries, or occlusions).
We do average multiple samples for both depth and flow (see Section 4.1). Table 3 shows small but consistent improvements in depth estimation as we average more samples. For optical flow, we average 8 samples for the coarse-grained estimate, but we do not use sample averaging in the high resolution refinement. We will clarify this in the final version of the paper.
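Schematically, the averaging is straightforward (with `sample_fn` standing in for one draw from the model's predictive posterior; the pixel-wise mean is an illustration, not a claim about the exact reduction used):

```python
import numpy as np

def averaged_estimate(sample_fn, n=8):
    # draw n samples from the predictive posterior and average pixel-wise
    return np.mean([sample_fn() for _ in range(n)], axis=0)
```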
> How sensitive is the proposed architecture to different image resolutions for both depth and flow tasks? For example, if the inference image resolution is different from training, will the model still perform reasonably?
Like prior regression methods (Figure 4 of [17]) our model's performance degrades if one naively runs inference at a resolution that is different from the training resolution. However, in the paper we explain how to use the diffusion model within a coarse-to-fine refinement scheme (see Section 3.3). This way we are able to effectively run inference on high resolution images, first at a coarse-resolution, and then patch-wise at high resolution, conditioned on the coarse-grained estimate to provide global context. Table 5 shows the improved performance on optical flow estimation with this approach.
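In pseudocode-like form, the coarse-to-fine scheme looks roughly like the following. This is a simplified sketch, not our actual implementation: `model` stands in for diffusion sampling at the training resolution, we equate the training resolution with the patch size, and we assume a square image whose side is a multiple of that size.

```python
import numpy as np

def downsample(x, factor):
    # block-mean downsampling by an integer factor
    h, w = x.shape
    return x.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

def upsample(x, factor):
    # nearest-neighbor upsampling by an integer factor
    return np.repeat(np.repeat(x, factor, axis=0), factor, axis=1)

def coarse_to_fine(frame, model, patch=32):
    """Run `model` once at low resolution for global context, then refine
    patch-wise at full resolution, conditioned on the coarse estimate."""
    H, W = frame.shape
    factor = H // patch                         # assume H == W, divisible
    coarse = model(downsample(frame, factor))   # low-res global estimate
    guide = upsample(coarse, factor)            # context for each patch
    out = np.zeros_like(frame)
    for i in range(0, H, patch):
        for j in range(0, W, patch):
            crop = frame[i:i + patch, j:j + patch]
            ctx = guide[i:i + patch, j:j + patch]
            out[i:i + patch, j:j + patch] = model(crop, cond=ctx)
    return out
```

The conditioning on `guide` is what prevents each high-resolution patch from being estimated in isolation, which is the failure mode of naive tiling.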
> The optical flow results on KITTI are very strong, but the results on Sintel seem less robust (as also analysed in the supplementary material). What might cause the different behaviours on Sintel and KITTI? Could the authors further comment on this?
In Table 9 of the supplementary material, we show that we outperform FlowFormer on all Sintel test sequences but one, ambush_1. This sequence has significant ambiguity due to an all-white background, large camera and object motions, and the existence of numerous objects that occlude one another and move out of frame. This forces the model to depend heavily on inductive biases and learned motion priors, which is challenging for our method. It is possible that FlowFormer’s inductive bias (with the cost volume etc.) makes it better suited to reason about this sequence’s large out-of-frame motions.
> RAFT outperforms diffusion models when only using AutoFlow
As discussed in Section 3.1 (paragraph 2) and shown in Figure 3, we find that when trained solely on AutoFlow the diffusion model learns to reproduce shapes from the AutoFlow data, yielding poorer qualitative and quantitative performance. The denoiser’s bias toward polygonal shapes in AutoFlow could partially be explained by recent work (e.g., Section 4.2 of [18]) on shape vs. texture bias in neural classifiers. As a result, using AutoFlow alone causes the model to hallucinate on real data and try to identify the best AutoFlow shapes to represent the real objects. Interestingly, this problem is solved by training with larger, more diverse data.
> I am wondering whether this indicates that diffusion models can benefit more from larger datasets than previous regression methods?
This is a great question! The finding that diffusion models benefit more from larger datasets is indeed surprising. Like most regression-based flow networks, RAFT has several architectural elements which bias the network towards modeling flow; e.g., RAFT features an all-pairs cost volume, which compares all the pixels (in encoding space) in frame 1 to frame 2, and then accesses this cost volume through a lookup operation based on the flow. This network element strongly encourages the network to use pixel comparisons to generate the predicted flow. As a result, despite being trained only on AutoFlow (a synthetic dataset), RAFT can quickly learn to use pixel or patch similarity to compute optical flow, an ability which generalizes quickly to real-world videos. In contrast, our diffusion pipeline lacks any of these specific model biases and must learn them through data. However, it is important to note that RAFT's ability to learn quickly comes with a tradeoff; namely, the inductive biases that are hard-coded into its architecture may be suboptimal. Given enough data, learning these biases may be better than designing them through manual architecture design.
---
Rebuttal Comment 1.1:
Comment: Thanks for the detailed responses. Overall I agree with the authors, but would like to have a followup discussion on one small point.
Regarding the limited performance on the Sintel ambush_1 sequence, the authors mentioned that "FlowFormer’s inductive bias (with the cost volume etc) make it better suited to reason about this sequence’s large out-of-frame motions". However, I guess the cost volume will also be less effective for out-of-frame motions, since the points are not matchable across frames. Thus it seems less likely that encoding the matching cost in the architecture will solve this problem. Maybe leveraging some context information (e.g., motion smoothness/propagation in GMA and GMFlow) would be helpful? I think this is an open question and could be considered as future work.
This question doesn't affect my opinion, I am happy to accept this paper. Thanks.
---
Reply to Comment 1.1.1:
Comment: Thank you for your comment. We agree with the reviewer that motion propagation and global motion aggregation are likely the main signals needed. We provide more details below.
Broadly, we see two categories of errors: (1) From out-of-frame motion. As the reviewer suggests, the motion smoothness/propagation priors used in existing works might help with this. (2) Inconsistent flow for texture-less background regions. For example, ambush_1 frame_1 has a foreground that splits the background into two, with texture on one section of the background and almost no texture on the other. This leads to an ambiguity with our model sometimes predicting consistent flow for the entire background but often not (note that for frames with less ambiguity, the diffusion model is able to successfully propagate the motion). We find that the RAFT model also struggles on such sequences, which suggests that smoothness is not a sufficient prior for handling such cases. As the reviewer mentioned, GMA and GMFlow are some of the first models to successfully address this example (and furthermore the ambush_1 sequence) and it seems likely that this is due to their modules for global motion aggregation and motion propagation, since these signals can align the flow in the background. Theoretically, the self attention blocks in our model should be capable of global motion aggregation so it is possible that our training data distribution does not sufficiently cover such scenes. We hypothesize that there may be multiple ways to solve this problem (data, model design, improvements to sampling, i.e., better approaches to aggregation or re-ranking) and are excited to explore this in future work.
The reason that the parenthetical in our original response starts with cost volume is because in current models the cost volume is the starting point for global motion aggregation modules (for FlowFormer, the aggregation happens as a post processing of the cost volume into a "cost memory", for GMA, the cost volume is processed into the "motion feature encoder" and then again into the "global motion aggregation" module), but the reviewer is correct to point out that the exact mechanism is global motion aggregation / motion propagation, which should have been mentioned. | Summary: The authors study the use of diffusion models for the tasks of single-image depth estimation and optical flow estimation. Self-supervised pre-training, supervised fine-tuning with synthetic and real data, combined with a couple of tricks to leverage imperfect GT, lead to competitive results with nearly no task-specific modifications to the diffusion models. The authors also demonstrated unique capabilities enabled by the diffusion models, e.g., capturing multimodality and completion from partial data.
Strengths: **Originality and significance**:
As far as I know, this is the first work to study the use of generative diffusion models for optical flow and depth estimation tasks. Training good optical flow and depth models typically requires extensive task-specific knowledge in terms of architecture designs and loss functions, so the authors' finding that diffusion models can compete well on these tasks with almost no task-specific treatments is non-trivial and valuable. The competitive results in various settings further add to the significance of the work.
**Technical quality**:
Training diffusion models successfully involves many technical details. The authors generally follow best practices and propose reasonable solutions to unique challenges. More specifically, the use of pre-trained PALETTE, further supervised pre-training with mixtures of synthetic and real datasets, addressing imperfect GT with infilling and step-unrolling, etc. are all well-motivated and proven effective.
**Writing quality**:
The paper is nicely written, with precise language, adequate details, and clear explanations. Conclusions are justified with plenty of results, visualizations, ablation studies, and overall convincing.
Weaknesses: The authors claim that diffusion models can be a generic framework for vision dense prediction tasks, but only consider the tasks of depth estimation and optical flow in this work. Both these tasks are relatively low-level, and it'd be interesting to offer some insights, discussions, or analyses regarding how higher-level tasks, such as semantic segmentation, differ from them and if the claim still holds.
Two relatively minor complaints/suggestions:
* Can the authors provide some theoretical analysis or proof for the step-unrolling step? Could unrolling more steps bring further improvements?
* It's unclear how much the Palette self-supervised pre-training helps, since it's not part of the ablation study.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: Please refer to the three points in the weakness section above.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 3 good
Limitations: I agree with the limitation the authors brought up in the supplementary. Optical flow and depth estimation are low-level tasks commonly used at early stages of real-world application systems and therefore demand higher efficiency. As the authors already pointed out, this is where the proposed diffusion models fall short.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your review, and for your thoughtful comments and questions. They will certainly help improve the revised paper. Please see our response below.
> Do the claims hold on higher-level tasks such as semantic segmentation
The extent to which a generic diffusion model is effective on other vision tasks, including higher-level tasks is a topic of ongoing work. Diffusion models have indeed been shown to work well on panoptic segmentation [15], which is encouraging.
> Theoretical analysis or proof for step-unrolling
One can view an unrolling step at time t as a Langevin update of an MCMC sampler for which the target distribution is the marginal distribution of the latent $y_t$ (i.e., the distribution of noisy optical flow fields or depth maps). From this perspective, unrolling steps act like corrector steps in the predictor-corrector sampler of Song et al. [16]. We will include further discussion in the paper.
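In symbols (our informal reading in standard DDPM notation, with $\bar{\alpha}_t$ the cumulative noise schedule and $\hat{y}_0(y_t)$ the model's estimate of the clean signal; this is a heuristic correspondence, not a formal proof), one unrolled step maps

```latex
% one step-unrolled update at time t: denoise, then re-noise
y_t' \;=\; \sqrt{\bar{\alpha}_t}\,\hat{y}_0(y_t)
        \;+\; \sqrt{1-\bar{\alpha}_t}\,\epsilon,
\qquad \epsilon \sim \mathcal{N}(0, I).
```

If $\hat{y}_0$ is accurate, this update approximately preserves the marginal distribution of $y_t$, which is precisely the role of a corrector step.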
> Could unrolling more steps bring further improvements
Great question. For datasets in which the ground truth flow or depth maps have more missing data (e.g., like KITTI) one would expect more unrolling steps to be useful in matching the marginal distribution of the latent $y_t$. In our experiments we did find this to be the case (please see Table 2 in attachment). For instance, increasing the number of unrolling steps from one to four improves KITTI REL from 0.056 to 0.053, and the RMS from 2.700 to 2.568. On NYU, improvements are marginal (REL improves from 0.075 to 0.074 and RMS from 0.324 to 0.315) as one might expect since the ground truth data have fewer missing depth values.
> how much the use of Palette self-supervised pre-training helps
Please see Tables 5 and 6 in our supplementary material. They show that self-supervised pre-training clearly improves results for monocular depth estimation. This isn’t entirely surprising since tasks like inpainting and colorization entail some form of ‘semantic understanding’. We expect similar findings for optical flow estimation, but we did not perform this study on optical flow estimation since pre-training is compute intensive.
---
Rebuttal Comment 1.1:
Comment: Thank you for the detailed responses. It is a very interesting work with valuable insights and practical value. I don't see any major weaknesses in this submission. I maintain my acceptance recommendation and have no further questions. | Summary: The paper demonstrates that diffusion models are effective general-purpose solutions for dense optical flow and monocular depth regression tasks. The paper shows that the same architecture and loss functions lead to at-par or better performance on these tasks, compared to existing methods that use domain knowledge and problem-specific architectures. The main insights presented are the use of a pre-training phase for higher quality, imputation of the missing values, and a step-unrolled diffusion step for dealing with incomplete GT training data.
Strengths: The paper is well-written and motivates the contributions perfectly. The simplicity of the architecture and the loss functions are very clear. The results support the claims and show at-par or better results than the state of the art. Extensive experiments on different real datasets help evaluate the method's quality. Multi-modality of the outputs is promising! I would have loved to see some more analysis there. The presented application is also exciting, showing directions for text to 3D reconstructions.
Weaknesses: While there is limited technical novelty, this is a good paper that demonstrates a simple method for solving two different regression tasks.
- I did not understand why the pre-training phase needs to be separate. The method uses supervised pre-training that is different for each task. The loss functions are also identical between the pre-training and fine-tuning stages. Why not combine all available datasets and just train the model once (incl. all the tricks used for fine-tuning)? I did not understand this distinction between phases, especially when the experiments are explained and the datasets keep moving from one phase to the other.
- Step-unrolled diffusion is a little similar to the self-conditioning introduced in "Analog Bits: Generating Discrete Data using Diffusion Models with Self-Conditioning" [Chen et al.]. I wonder if self-conditioning is already good at addressing the domain-shift problem, or whether step-unrolled diffusion is really needed.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: - Please answer the questions raised on pre-training and on self-conditioning.
- Is it possible to fine-tune an existing diffusion model, such as stable diffusion, instead of training one from scratch? Could it help avoid the use of synthetic data?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your review, and for your thoughtful comments and questions. They will certainly help improve the revised paper. Please see our response below.
> limited technical novelty
One of the main motivations for this paper is to understand how well vanilla diffusion models perform on classical dense computer vision tasks which are traditionally solved using specialized techniques. As a result, we intentionally kept the network design and diffusion formulation simple.
We do, however, identify and solve several issues unique to the application of diffusion models to optical flow and monocular depth estimation. For example, innovations in training are necessary for the application of diffusion models to dense regression problems with noisy and incomplete ground truth data. To that end, we introduce infilling and step-unrolling, which greatly improve performance. For optical flow on the KITTI dataset, baseline diffusion tends to diverge, and SOTA performance only results from the combination of both these additions. For KITTI depth, infilling and unrolling result in a substantial improvement in REL (0.222 to 0.056). In addition, we find that our optical flow network requires different training data than regression-based optical flow techniques, and we solve this through a new training regime. Beyond these innovations, we also introduce a coarse-to-fine refinement scheme which provides further performance gains and greater flexibility in the image resolutions to which the model can be applied. We also demonstrate the use of imputation for text-to-3D generation.
But the main novelty in the paper is the demonstration that it is possible to generate SOTA results on well studied regression problems for which previous methods have relied heavily on specialized techniques such as the use of cost volumes [1] for flow and binning [14] for depth. We propose a common architecture and training procedure across two different dense vision problems. We think this framework is encouraging and motivates new directions for vision research, with a common architecture for many vision problems. Further we show that the diffusion framework is sufficiently powerful to capture the multi-modal predictive distributions (e.g. capturing ambiguity in flow or depth estimation).
> More analysis of multi-modality
The ability of diffusion models to represent complex multimodal distributions, without excessive mode collapse is arguably one of the key properties that has led to the recent success and excitement around diffusion models. Their ability to capture uncertainty in the predictive posterior distributions over depth and optical flow, including multi-modality in cases of ambiguity, is quite interesting. Figures 1-4 in the supplementary material show examples and variance heat maps of the multi-modal predictions for both depth and flow. For depth we observe multimodality in transparent and reflective surfaces, such as mirrors and windows of cars, and around object boundaries. For flow we observe multimodality in transparent surfaces and shadows, thereby capturing the layered nature of the scene. Notably we also observe multimodality in out-of-frame motion where the flow is ambiguous. We think this is an interesting finding and opens up further avenues of research and also provides a way to measure uncertainty which can be important for downstream applications.
> I did not understand why the pre-training phase needs to be separate.
Model pre-training on large-scale datasets is commonplace for optical flow [1, 2] and monocular depth [3, 4]. E.g. for optical flow the training data schedule (e.g. FlyingChairs -> FlyingThings3D -> mixture of Sintel / KITTI / VIPER etc.) has been heavily studied and shown to be crucial to achieve good performance (Table 5 of [12] and Table 1 of [13]). Considerable effort has recently been spent designing better pre-training datasets [10, 11]. This combination of pre-training and fine-tuning provides the advantages of large-scale datasets, with pre-trained models often yielding good zero-shot performance, with the ability to fine-tune models to a specific dataset to maximize performance (perhaps with some loss of generality).
> I wonder if self-guidance is already good at addressing the domain shift problem
In our experiments with self-guidance we found that it does not address the domain shift problem (see Table 1 in attachment). This is expected because, even with self-guidance, there remains a training/inference distribution shift for the latent $y_t$, which is what we address through step-unrolled diffusion training.
> Is it possible to fine-tune an existing diffusion model, such as stable diffusion, instead of training one from scratch? Could it help avoid the use of synthetic data?
Indeed, we use an existing self-supervised pre-trained diffusion model (i.e., the Palette model [5]) for image-to-image translation. As shown in Tables 5 and 6 of our supplementary material, this self-supervised pre-training substantially improves results.
It may also be possible to use a pre-trained text-conditional image generation model, especially in light of recent works such as [8, 9], which exploit features from pretrained text-to-image models for depth regressors. We briefly considered fine-tuning existing text-to-image models, but found that available pixel-space models at the time, such as Imagen [7], were computationally expensive (2B parameters in the base model), which might limit their utility in practical vision applications, while latent-space models like Stable Diffusion additionally require dealing with holes in the autoencoder training (as in [6]). Hence, we left this exploration to future work.
It may be possible that the use of significantly more text-image training data or larger datasets for image-to-image translation would allow one to avoid training with synthetic data. We have not explored this, but we agree that this is important future work.
---
Rebuttal Comment 1.1:
Comment: Thank you for the very well-written rebuttal. It clarifies the contributions and answers the other questions about the baselines and training details. | Rebuttal 1:
Rebuttal: References
[1] RAFT: Recurrent All-Pairs Field Transforms for Optical Flow, Teed and Deng, 2020
[2] FlowFormer: A Transformer Architecture for Optical Flow, Huang et al, 2022
[3] Towards Robust Monocular Depth Estimation: Mixing Datasets for Zero-shot Cross-dataset Transfer, Ranftl et al, 2020
[4] Vision Transformers for Dense Prediction, Ranftl et al, 2021
[5] Palette: Image-to-Image Diffusion Models, Saharia et al, 2022
[6] All in Tokens: Unifying Output Space of Visual Tasks via Soft Token, Ning et al, 2023
[7] Photorealistic Text-to-Image Diffusion Models with Deep Language Understanding, Saharia et al, 2023
[8] Unleashing Text-to-Image Diffusion Models for Visual Perception, Zhao et al, 2023
[9] Beyond Surface Statistics: Scene Representations in a Latent Diffusion Model, Chen at al, 2023
[10] AutoFlow: Learning a better training set for optical flow, Deqing et al 2021
[11] Self-supervised AutoFlow, Huang et al, 2023
[12] Models Matter, So Does Training: An Empirical Study of CNNs for Optical Flow Estimation, Sun et al, 2018
[13] FlowNet 2.0: Evolution of Optical Flow Estimation with Deep Networks, Ilg et al, 2017
[14] AdaBins: Depth Estimation using Adaptive Bins, Bhat et al, 2020
[15] A Generalist Framework for Panoptic Segmentation of Images and Videos, Chen et al, 2023
[16] Score-based Generative Modeling through Stochastic Differential Equations. Song et al, 2021
[17] Vision Transformers for Dense Prediction, Ranftl et al, 2021
[18] Text-to-Image Diffusion Models are Zero-Shot Classifiers, Clark and Jaini, 2023
[19] Diffusion Models Beat GANs on Image Synthesis, Dhariwal et al, 2021
Pdf: /pdf/a592304d4dd61a6d0c2d63f5aada07e86a118220.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Learning Repeatable Speech Embeddings Using An Intra-class Correlation Regularizer | Accept (poster) | Summary: Starting with the observation that supervised embeddings should vary from one class to another but not be sensitive to variations within a given class, this paper proposes to add an intra-class correlation (ICC) regularization to the contrastive loss in representation learning. This new regularizer is derived from measurement theory and encourages the embeddings to have a low intra-class variance, whereas contrastive losses push the embeddings more towards high inter-class variance. After a simulation illustrating the "behaviour" of these losses with respect to inter- and intra-class variance, the authors report good improvements using the proposed ICC regularizer for three different tasks.
Strengths: - The paper is overall well written, the idea well explained and easy to follow and to understand
- The ICC regularizer seems to really focus on intra-class variance, i.e. make all occurrences of the same class have the same embeddings, whereas the contrastive loss does both at the same time and never really focuses on minimizing the intra-class variance.
- The method is relatively simple, very well explained and easy to understand. The simulation, comparison with contrastive loss and classification error rate is quite interesting.
- When the embeddings are constrained, it is indeed useful to focus on minimizing the intra-class variance to separate classes. The explanation and results are convincing.
Weaknesses: - Missing comparison with other techniques mentioned in the related work (data augmentation, etc.). Is the proposed approach competitive with these methods? Could it be complementary? How does it compare to VICReg [4], for example?
- In the result section, the text mainly lists the results in the tables. Maybe a reformulation would allow more space for example to study the effect of the regularization weight
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: Could that method apply to self-supervised approaches, where the notion of classes is less well defined?
Is it competitive with other methods mentioned at the beginning of the paper? Or could it be complementary?
Are the improvements with that method bigger on smaller datasets or are they independent of the dataset size? Is the method especially suited for some tasks and less for other (e.g. could that be useful for classification, or ASR?)?
minor remark:
- sec 3.1: "represents the i-th samples" -> sample
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations: yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank reviewer hWT5 for the comments.
**Response to "Is ICC competitive with other techniques? Or could it be complementary?":** In Lines 30-31, we identify two primary groups of approaches for managing the intra-class variance: (1) enhancing the diversity of the training data, and (2) utilizing innovative learning algorithms. Below we discuss how the ICC is related to both types of approaches.
1. Our results show that the ICC regularizer is complementary to approaches belonging to the first group. For example, data augmentation was implemented in the speaker verification task (Section 4.1) and dysphonic voice detection task (Section 4.3) for both contrastive loss only and contrastive loss + ICC regularizer experiments. The ICC regularizer provides additional improvement on top of data augmentation. Implementing the ICC regularizer can indeed contribute to further minimizing intra-class variance, thus complementing those methods.
2. Our results show that the ICC regularizer provides benefit over GE2E, AngleProto, and SupCon (see Section 4.1). It is unclear whether the ICC regularizer is complementary to all innovative learning algorithms; this will depend on the optimization criteria for each loss. For example, the algorithm proposed in (Rizve et al., 2021; [42] in the manuscript) focuses on using cosine similarity to construct the cost function. Since this focus is considerably different from the ICC and does not directly overlap with our ICC regularizer's functionality, we believe that our ICC regularizer could be a compatible addition to this algorithm. In contrast, VICReg (Bardes et al., 2021; [4] in the manuscript) focuses on the variance of embedding vectors, which may be redundant with our regularizer.
We will add a discussion about "Comparison between the ICC regularizer and other intra-class variance minimizing techniques" in Section 3.2 of revised manuscript.
(Rizve et al., 2021) - Rizve, Mamshad Nayeem, et al. "Exploring complementary strengths of invariant and equivariant representations for few-shot learning." _Proceedings of the IEEE/CVF conference on computer vision and pattern recognition_. 2021.
(Bardes et al., 2021) - Bardes, Adrien, Jean Ponce, and Yann LeCun. "Vicreg: Variance-invariance-covariance regularization for self-supervised learning." _arXiv preprint arXiv:2105.04906_ (2021).
**Response to "more details about the effect of the regularization weight":** We will add more context about the regularization weight to the manuscript in revision, based on the information provided in Supplemental Material - Appendix Section 2.
**Response to "apply to self-supervised approaches":** Any self-supervised method that can use contrastive-style training can also use the ICC. For example, consider the self-supervised deep learning methods proposed in (Falcon et al., 2020; Ciga et al., 2022): here, the anchor-positive pair could be treated as an intra-class data pair, and the anchor-negative pair treated as an inter-class data pair. This is an interesting avenue of future work, and we will mention this possibility in the revision at the end of the paper.
(Falcon et al., 2020) - Falcon, W., & Cho, K. (2020). A framework for contrastive self-supervised learning and designing a new approach. arXiv preprint arXiv:2009.00104.
(Ciga et al., 2022) - Ciga, O., Xu, T., & Martel, A. L. (2022). Self supervised contrastive learning for digital histopathology. Machine Learning with Applications, 7, 100198.
**Response to "ICC regularizer effectiveness dependency on dataset size":** The improvements with the given method might be more pronounced on smaller datasets. Regularization techniques, in general, tend to be more useful for smaller datasets. However, it's hard to provide an unequivocal answer without a deeper analysis of the specific method and context. Depending on the method and the nature of the data, the impact might vary.
**Response to "ICC regularizer suitability for specific tasks":** The method appears to be more suited for tasks where repeatability is a crucial factor, such as speaker verification and classification. It might be particularly beneficial for these applications, as consistency and reliability might be key in achieving high performance. On the other hand, for tasks like automatic speech recognition or pure end-to-end translation systems, where repeatability might be less critical, the method may not be as effective. Further analysis and experimentation would be required to determine the full scope of its applicability across different tasks and domains. | Summary: The paper introduces the ICC regularizer, a novel regularization technique that complements the contrastive loss to enhance the repeatability of embeddings. The authors illustrate the reason why the ICC regularizer better minimizes the intra-class variance than the contrastive loss, offering a fresh perspective on latent representation learning. The experimental results demonstrate that the ICC regularizer enhances the repeatability of learned embeddings. Consequently, embeddings with higher repeatability exhibit superior performance in downstream tasks.
Strengths: 1. This paper links the concept of repeatability, derived from measurement theory, to deep-learned embeddings, which is novel and interesting. The authors present an explanation of how the ICC regularizer effectively reduces intra-class variance compared to the contrastive loss, offering a novel perspective on latent representation learning.
2. To enhance the repeatability of the learned embeddings, this paper introduces a novel regularizer called the ICC regularizer, which is used as a regularizer for the contrastive loss.
3. The experimental results indicate that using the ICC regularizer leads to better performance in three downstream tasks.
Weaknesses: My main concerns are mainly related to the experiments. Please refer to the following Questions section for details.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. In Section 4.2, ''GE2E Loss + ICC Regularizer'' shows superior speech naturalness on subjective evaluations. However, the objective metrics like speaker similarity scores and word error rate (WER) are missing.
2. In Section 4.2, the paper aims at zero-shot voice style conversion, which requires highly generalized speaker embedding. However, the authors train and evaluate the zero-shot voice style conversion on a small dataset called VCTK. I would like to ask if the generalization performance of the proposed method be further improved by using larger training datasets? Additional experiments on larger dataset would make the paper more convincing.
3. Although the audio samples with the ICC regularizer are better, after listening to the audio samples in the supplementary materials, I think the audio quality should be further improved for practical scenarios. Is this due to the AutoVC model used? Could the authors use the latest SOTA voice conversion model to improve the speech quality?
I would like to raise my scores if the authors have fully addressed my concerns.
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Some limitations of the work are discussed. The potential negative societal impact of the work is not addressed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank reviewer 89AX for the comments.
**Response to Question 1:** We evaluate the two methods objectively using a speaker similarity score and word error rate (WER). We use an open-source speaker encoder (huggingface.co/speechbrain/spkrec-ecapa-voxceleb) to calculate the speaker similarity score between the target speaker's audio and the transformed output. We use Wav2Vec2 (huggingface.co/docs/transformers/model_doc/wav2vec2) to perform ASR on the transformed output, and calculate the WER and character error rate (CER) using the *jiwer* module (pypi.org/project/jiwer/2.5.1/). The results are given as follows:
(VC model used here is AutoVC.)
||Speaker Similarity Score|WER|CER|
|--|--|--|--|
|GE2E Loss|0.2231|0.5810|0.3817|
|GE2E Loss + ICC Regularizer|0.2309|0.5109|0.3324|
The results demonstrate that speaker embeddings with higher repeatability also result in better performance across all three metrics for voice style conversion.
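For concreteness, the WER and CER reported by the *jiwer* module reduce to a token-level Levenshtein edit distance divided by the reference length; the following is a minimal, self-contained sketch of that computation (an illustration only, not *jiwer*'s actual implementation):

```python
def edit_distance(ref, hyp):
    # classic Levenshtein dynamic program over two token sequences
    m, n = len(ref), len(hyp)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i
    for j in range(n + 1):
        d[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            sub = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + sub)  # substitution / match
    return d[m][n]

def wer(reference, hypothesis):
    # word error rate: word-level edit distance / number of reference words
    ref_words = reference.split()
    return edit_distance(ref_words, hypothesis.split()) / len(ref_words)

def cer(reference, hypothesis):
    # character error rate: the same distance computed over characters
    return edit_distance(list(reference), list(hypothesis)) / len(reference)
```

For example, `wer("the cat sat", "the cat sit")` gives 1/3 (one substitution over three reference words).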
**Response to Question 2:** Using the VCTK dataset solely for training is a widely adopted strategy in voice conversion research, including in SOTA models (Tang et al., 2022; Casanova et al., 2022; Li et al., 2023). Other, larger speech datasets such as LibriTTS and Common Voice are only used for testing in select papers (Li et al., 2023).
Both LibriTTS and Common Voice are indeed substantial datasets, but they have different characteristics compared to VCTK.
**VCTK**: Specifically designed for studying various speech processing tasks, including voice conversion. The dataset contains diverse accents but with relatively clean recordings.
**LibriTTS**: LibriTTS is designed more for TTS (Text-To-Speech) tasks. It's derived from audiobooks, so the speaking style is primarily a reading style.
**Common Voice**: This dataset has a broader goal of collecting voice data for a variety of languages to support voice technologies. The quality and style of recordings vary greatly in this dataset. What's more, many speakers have very limited recordings in this dataset.
Voice conversion benefits from consistent and clean recordings, as it aims to capture and transfer speaker characteristics without extraneous noise or variability, and it requires capturing the nuances of individual speakers. So, compared to VCTK, LibriTTS and Common Voice are less suitable for training voice conversion models.
The above information gives some insight into why most VC research prefers VCTK over larger but noisier datasets. However, as we found the reviewer's comment very interesting, we tried training the AutoVC model on the Common Voice dataset and tested the model on the VCTK dataset under the unseen-to-unseen speaker scenario. The model is trained with the "GE2E loss" speaker encoder (with and without the ICC) described in Section 4.1. We only report objective scores, including the speaker similarity score, WER, and CER, as there was not enough time to conduct AB preference experiments.
(VC model used here is AutoVC.)
|AutoVC|Speaker Similarity|WER|CER|
|--|--|--|--|
|GE2E Loss + train on VCTK|0.2231|0.5810|0.3817|
|GE2E Loss + train on Common Voice|0.1934|0.5958|0.3942|
|GE2E + ICC Regularizer + train on VCTK|0.2309|0.5109|0.3324|
|GE2E + ICC Regularizer + train on Common Voice|0.2131|0.5308|0.3544|
The results reveal that employing a larger dataset for training the VC model does not necessarily lead to improved performance. However, the proposed ICC regularizer is still effective when using a large training dataset and leads to a better performance on several metrics.
(Tang et al., 2022) - Tang, Huaizhen, et al. "Avqvc: One-shot voice conversion by vector quantization with applying contrastive learning." _ICASSP 2022_. IEEE, 2022.
(Casanova et al., 2022) - Casanova, Edresson, et al. "Yourtts: Towards zero-shot multi-speaker tts and zero-shot voice conversion for everyone." _ICML 2022_. PMLR, 2022.
(Li et al., 2023) - Li et al. "Freevc: Towards High-Quality Text-Free One-Shot Voice Conversion." _ICASSP 2023_. IEEE, 2023.
**Response to Question 3:** We use a SOTA voice conversion model to conduct the experiment outlined in Section 4.2. We selected a recently published work, FreeVC, for this purpose. Details of FreeVC can be found on GitHub at github.com/OlaWod/FreeVC and the paper is accessible at arxiv.org/abs/2210.15418.
For the experiment, we utilized the version of FreeVC without speaker regularization (w/o SR), as it has the fastest training speed compared to the other two versions and we had limited time for training. We trained it using the speaker encoder mentioned in our paper (Lines 258-287). Both the training and testing were performed exclusively on the VCTK dataset. However, training took longer than we expected. We used the code provided by the authors of FreeVC to train the model on an Nvidia A100 card. The model is still training; however, we report an intermediate result below (the paper recommends 900k steps; the results below are at ~500k steps) (Li et al., 2023). We report the objective scores, including the speaker similarity score, WER, and CER, for the 500k-steps-trained FreeVC model here.
(VC model used here is FreeVC w/o SR, trained 500k steps.)
||Speaker Similarity Score|WER|CER|
|--|--|--|--|
|GE2E Loss|0.2753|0.2556|0.0755|
|GE2E Loss + ICC Regularizer|0.2899|0.2163|0.0718|
Even though we did not complete the FreeVC training, the results have already shown improved output performance over the results reported in the paper by using a SOTA voice conversion model. The reported speaker similarity scores, WER and CER, demonstrate that speaker embeddings with higher repeatability also result in better performance for voice style conversion tasks. It is important to note that the intermediate results reported herein cannot be directly compared to those in the paper as the model is still training.
(Li et al., 2023) - Li et al. "Freevc: Towards High-Quality Text-Free One-Shot Voice Conversion." _ICASSP 2023_. IEEE, 2023. | Summary: The paper presents a simple but seemingly effective idea to regularize contrastive embedding extractors. The regularizer aims at the intra-class correlation to ensure the repeatability of the embedding.
The effectiveness is confirmed experimentally on the tasks of speaker verification, voice style conversion, and dysphonic voice detection.
Strengths: The presented idea is well introduced and the illustration of the ICC regularizer action compared to the GE2E loss is insightful. The evaluation is sufficient.
Weaknesses: The introduced regularizer uses 2nd-order statistics only, which could be a limiting factor for the embedding extractors, although, on the other hand, this need not be a requirement.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: The contrastive loss has a direct relationship to mutual information; one could think that the ICC is automatically included in the mutual information when only 2nd-order statistics are taken into account. What could be the reason that this doesn't seem to be the case?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: The introduced regularizer uses 2nd order statistics only.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank reviewer Yotw for the comments.
It is true that the ICC relies only on second-order statistics and it’s possible that it may not capture complex, nonlinear relationships within or between groups. That said, it does have several benefits over other more complex measures:
1. This simplicity often means fewer assumptions and parameters compared to some nonlinear measures. With small sample sizes, complex models that require estimation of numerous parameters may suffer from overfitting or instability, whereas the ICC may provide more robust estimates.
2. The ICC is a standard measure used in many fields, including psychometrics, medical research, and reliability engineering. Its widespread acceptance means that results based on the ICC can be easily compared with other studies or benchmarks.
3. The reliance on second-order statistics makes the ICC computationally efficient.
4. To the best of our knowledge there are no commonly-used measures of repeatability that are based on higher-order statistics.
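To make the reliance on second-order statistics concrete, one common variant, the one-way random-effects ICC(1), is built entirely from between-class and within-class mean squares. The sketch below is illustrative only (equal group sizes assumed; the manuscript's exact regularizer formulation may differ):

```python
def icc1(groups):
    # groups: list of lists, each inner list holding repeated
    # measurements of one class (equal group size k assumed)
    n = len(groups)          # number of classes
    k = len(groups[0])       # measurements per class
    grand = sum(sum(g) for g in groups) / (n * k)
    means = [sum(g) / k for g in groups]
    # between-class and within-class mean squares: purely 2nd-order statistics
    msb = k * sum((m - grand) ** 2 for m in means) / (n - 1)
    msw = sum((x - m) ** 2
              for g, m in zip(groups, means) for x in g) / (n * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)
```

Perfectly repeatable measurements (zero within-class variance) give an ICC of 1, while groups with no class separation drive the ICC toward its negative lower bound.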
While the example contrastive loss in our manuscript, the GE2E loss, shares some high-level goals with MI maximization (such as learning informative representations), as far as we know, it does not have a direct mathematical relationship to MI. The connection between contrastive learning and MI is more explicitly established in other contrastive methods like the InfoNCE loss, which are specifically designed to approximate MI maximization. Those loss functions are less commonly used in speech, owing to the superior performance demonstrated by the GE2E loss on speech tasks. While the Pearson correlation (different from the ICC measure we use here) does have a connection with MI, to the best of our knowledge, there is no standard or direct mathematical formula that connects the ICC and MI; they are derived from different statistical principles.
Strengths: 1. This paper leverages the concept of repeatability and states that higher repeatability can contribute to improved performance in downstream tasks. Authors think about the importance of intra-class variance and the proposed ICC regularizer places greater emphasis on minimizing it to help increase repeatability. The simulation results in Section 3.2 explains the intuition clearly.
2. As a complementary component to contrastive loss, adding the proposed ICC regularizer successfully increase the repeatability and shows good performance in the three different tasks.
Weaknesses: 1. In Task 1, the performance of the SV system is not good, though adding the ICC regularizer improves it. In this paper, VGG-M-40 is trained on VoxCeleb1 & 2. The best result (3.78% EER) reported in Table 1 is not as good as other commonly used models for SV. In Chung et al.'s paper, VGG-M-40 shows much worse performance than Thin-ResNet34 and Fast ResNet34. To make the experiments solid and the results and claims more convincing, the authors should pick a good baseline instead of VGG-M-40.
2. For better replicability, more experimental details should be provided, such as the training details of Task 1.
3. To have comprehensive evaluation results for Task 1, the evaluation metric minDCF can be used as well.
4. Visualization (such as t-SNE) can be added as a comparison to see if the extracted speech or speaker embeddings have smaller intra-class variance in the space, though their ICC is decreased.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. As this paper mentioned, the ICC regularizer requires hyperparameter fine-tuning. Are the hyperparameter tuned on a validation set or a development set?
2. As this paper mentioned in Section 3.2, though the ICC increased, are the learned embeddings more compact within-class and more separated between-class in the overall embedding space? The authors can use t-SNE to visualize them.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The authors do not analyze the potential negative societal impact.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank reviewer efaw for the comments.
**Response to Weakness 1:** In this study, our focus is not on demonstrating that the proposed method can reach SOTA performance across various tasks. Rather, we aim to illustrate that the proposed ICC regularizer can improve the repeatability of learned embeddings, improving the performance of downstream applications. Additional experiments demonstrate that even with better models than the ones used in the paper, the proposed ICC regularizer can still improve performance. For example, we re-ran the VC task outlined in Section 4.2 with a SOTA VC model named FreeVC, using our speaker encoder (manuscript Lines 258-287). (For details, please see the response to Reviewer 89AX.)
(VC model used here is FreeVC w/o SR.)
||Speaker Similarity Score|WER|CER|
|--|--|--|--|
|GE2E Loss|0.2753|0.2556|0.0755|
|GE2E Loss + ICC Regularizer|0.2899|0.2163|0.0718|
While this model is still training (the authors suggest 900k steps), the results above represent an intermediate evaluation of the model. A higher speaker similarity score indicates better model performance; lower WER and CER also indicate better performance. As the results show, even at this intermediate evaluation, the consistent pattern from the SOTA VC model serves as tangible evidence that including the ICC regularizer with existing cost functions yields consistent improvements across several performance metrics.
We ran out of time for training the Fast ResNet34 model. When the FreeVC model training is complete, we will train the Fast ResNet34 model referenced in Chung et al.'s paper for the speaker verification task and refine and optimize our methodology.
**Response to Weakness 2:** In the revision, we will add following contents to Section 4.1 Experiment Setup:
*During training, we employ the Adam optimizer, maintaining a static learning rate of 0.001 without implementing any learning rate schedule. The dropout rate is set to 0.2 for all dropout layers. As for data augmentation: (1) we use variation in input audio length by randomly fixing the audio duration within a range of 1.5 to 3.0 seconds, and (2) we add Gaussian noise with an SNR randomly selected between 15 to 60 dB. No other augmentation methods are used.*
**Response to Weakness 3:** We follow the reviewer's comment and add the minDCF metric to Table 1. For the minDCF, we use hyper-parameters $P_{target}=0.05$ and $C_{fa}=C_{miss}=1$.
||minDCF|
|--|--|
|GE2E|0.2925|
|GE2E+ICC|0.2778|
|AngleProto|0.2809|
|AngleProto+ICC|0.2790|
|SupCon|0.2791|
|SupCon+ICC|0.2597|
The results demonstrate that the ICC regularizer improves model performance relative to the minDCF metric on the TI-SV task. The improvement in the metric is clear for all three baseline models to which the ICC regularizer is added.
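As a reference for how the minDCF numbers above are obtained, a standard computation sweeps a decision threshold over the trial scores and keeps the minimum weighted detection cost. Below is a minimal sketch using the hyper-parameters stated above ($P_{target}=0.05$, $C_{fa}=C_{miss}=1$ as defaults); any scores passed in an example are hypothetical, not the paper's actual trial scores:

```python
def min_dcf(target_scores, nontarget_scores,
            p_target=0.05, c_miss=1.0, c_fa=1.0):
    # sweep every observed score as a candidate threshold and keep
    # the cheapest weighted combination of miss and false-alarm rates
    best = float("inf")
    for t in sorted(set(target_scores) | set(nontarget_scores)):
        p_miss = sum(s < t for s in target_scores) / len(target_scores)
        p_fa = sum(s >= t for s in nontarget_scores) / len(nontarget_scores)
        dcf = c_miss * p_target * p_miss + c_fa * (1 - p_target) * p_fa
        best = min(best, dcf)
    return best
```

With perfectly separated scores the cost is zero; overlapping score distributions yield a positive minDCF, as in the table above.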
**Response to Weakness 4 and Question 2:** We follow the reviewer's comment and generate a t-SNE projection of the speaker embeddings. We use the two trained speaker encoders from Section 4.1, i.e., GE2E and GE2E+ICC, to generate the speaker embeddings. For the audio data, we randomly select 12 speakers from the test dataset and randomly select 10 samples from each speaker. After using the two encoders to extract embeddings for all samples, we run t-SNE on these embeddings; the results are shown in Figure B (in the global response PDF). Please look at Speakers No. 4 & 7: in Figure B(a), the two speakers' embeddings are mixed with those of other speakers, but in Figure B(b), the two speakers become separated from the other speakers because the ICC regularizer improves the repeatability of embeddings relative to speaker identities.
**Response to Question 1:** The hyper-parameter is tuned on the development dataset. The EER for subjects in the development dataset is used to determine the optimized hyperparameter. We will be sure to highlight this in the revision.
---
Rebuttal Comment 1.1:
Comment: I'd like to thank the authors for their feedback and some new experimental results. The explanations almost solve all my concerns. Although authors explained and mentioned that “our focus is not on demonstrating that the proposed method can reach SOTA performance across various tasks”, I totally understand that. Authors do not have to use the SOTA SV baseline or achieve the SOTA performance. But the performance of VGG-M-40 reported in this paper is much worse than current common SV system performance (e.g., ResNet and ECAPA-TDNN with around 1.0% EER on VoxCeleb1-O). I am looking forward to seeing the results of using a better SV baseline system such as Thin ResNet34 or Fast ResNet34.
I will reconsider the review in light of the provided clarifications and raise the score. | Rebuttal 1:
Rebuttal: We thank all reviewers for the comments.
**Response to "discussion on potential negative societal impact":**
Methods for learning new feature representations that focus on separability between classes can amplify biases that exist in the data. This is a well-known problem and it can occur when the data used to train the representation model is biased. This is especially problematic in high-stakes applications like healthcare, where biased predictions or decisions can lead to unequal treatment or access. Safe deployment of models based on the feature representations proposed herein will require thorough validation to detect potential biases and mitigation strategies for dealing with them. We will add this discussion to the revised paper.
**Response to "experiments and details of experimental setup":**
In the revision, we will add the following information to Section 4.1 Experiment Setup:
*During training, we use the Adam optimizer, maintaining a static learning rate of 0.001 without implementing any learning rate schedule. The dropout rate is set to 0.2 for all dropout layers. As for data augmentation: (1) we use variation in input audio length by randomly fixing the audio duration within a range of 1.5 to 3.0 seconds, and (2) we add Gaussian noise with an SNR randomly selected between 15 to 60 dB. No other augmentation methods are used.*
If accepted, we will add the following information about datasets we used in the supplemental material:
**VoxCeleb 1**: A large-scale speaker recognition dataset consisting of short video clips from YouTube. It includes over 100,000 utterances from more than 1,200 celebrities across various professions and demographics.
**VoxCeleb 2**: An extension of VoxCeleb 1, VoxCeleb 2 is an even larger dataset featuring approximately 1 million utterances from over 6,000 speakers. Together, VoxCeleb 1 and VoxCeleb 2 offer rich resources for training and evaluating speaker recognition models.
**VCTK** (The Voice Cloning Toolkit): VCTK is a speech dataset that includes recordings of various English accents. With over 44 hours of speech from 109 speakers, each speaking in their accent, VCTK provides a valuable resource for multi-accent speech synthesis and recognition research.
**MEEI** (Massachusetts Eye and Ear Infirmary): The MEEI Voice Disorders Database is a collection of speech samples from individuals with and without voice disorders. Participants are English speakers. It is often used in medical and clinical research to study voice pathology and develop systems to detect and analyze voice disorders. The MEEI database contains more than 1400 recordings of sustained phonations, which are collected from 53 healthy speakers and 657 speakers diagnosed with different types of dysphonia.
**SVD** (Saarbrücken Voice Database): The Saarbrücken Voice Database is a collection of voice recordings used for various phonetic and clinical studies. Participants are German speakers. It provides a comprehensive set of voice samples, including those from individuals with different voice disorders, aiding in the research of voice quality and characteristics. SVD database contains the voice recordings from more than 2000 speakers (428 healthy females, 259 healthy males, 727 dysphonic females, 629 dysphonic males).
**HUPA** (Hospital Príncipe de Asturias): Similar to MEEI and SVD, HUPA is a collection of speech samples from individuals with and without voice disorders. Participants are Spanish speakers. HUPA contains /a/ sustained phonation recordings of 366 adult Spanish speakers (169 dysphonic and 197 healthy).
Pdf: /pdf/168fa94b31369c92c60bb4b34c054cf907887941.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: In this paper, the concept of repeatability from measurement theory was introduced for representation learning. Intraclass Correlation (ICC) was proposed as an evaluation metric and the ICC regularizer was used as an additional element to contrastive loss for training. This aims to enhance the repeatability of deep-learned embeddings.
Some synthetic examples and intuitive reasoning were used to illustrate why the ICC regularizer is superior in minimizing intra-class variance compared to contrastive loss. The ICC regularizer was assessed across three speech tasks: speaker embeddings for Text-Independent Speaker Verification (TI-SV) and zero-shot voice style conversion, along with voice feature embeddings for a clinical application. The experiments indicate that the ICC regularizer can boost the repeatability of learned embeddings and perform better in downstream tasks.
Strengths: The paper proposed a novel metric of ICC based on measurement repeatability. Although the term may be a little confusing, the motivation
and intuition were well explained. The experimental results on three speaker/speech tasks are convincing.
Weaknesses: To show the repeatability of speaker embedding for TI-SV task, more controlled experiments may be necessary to prove the effectiveness of ICC loss in addition to the ICC scores: for example, adding noise to the testing sample, or switching languages of the same speaker as it was tested in the NIST Speaker Recognition Evaluation.
For results on dysphonic voice detection in Table 3, other baseline models were definitely over-trained compared to the proposed model.
Why do you train your own models instead of quoting the numbers from the original papers? (see line 334-335). The results in Table 3 may be unfair comparisons.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: See the comments above in the weakness section.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: No discussion on negative societal impact was included in the paper
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank reviewer wtbk for the comments.
**Response to "more controlled experiments may be needed":** We follow the reviewer's suggestion and run additional testing on audio samples with noise added. We evaluate three SNR levels: 30, 35, and 40 dB. For each input audio, we randomly generate Gaussian noise, scale it to the desired energy level, and add it to the audio to obtain the noisy audio at the desired SNR. The EER results are provided in the following table. The results show that adding the ICC regularizer improves the original loss for all three SNR levels evaluated. Furthermore, the performance gain is greater for the lower SNR (30 dB) than for the other two conditions, although the differences are small.
|Method|SNR=40dB|SNR=35dB|SNR=30dB|
|--|--|--|--|
|GE2E|5.64%|7.00%|9.04%|
|GE2E+ICC|5.34%|6.76%|8.66%|
|AngleProto|4.77%|5.76%|7.66%|
|AngleProto+ICC|4.52%|5.59%|7.34%|
|SupCon|4.35%|5.21%|6.25%|
|SupCon+ICC|4.21%|5.09%|6.10%|
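The noise-addition procedure described above (scaling Gaussian noise so the mixture reaches a target SNR) can be sketched as follows; this is a minimal numpy illustration, not the authors' implementation.

```python
import numpy as np

def add_noise_at_snr(audio, snr_db, rng=None):
    """Return audio plus white Gaussian noise scaled to the requested SNR (dB)."""
    rng = rng if rng is not None else np.random.default_rng()
    noise = rng.normal(size=audio.shape)
    p_signal = np.mean(audio ** 2)
    p_noise = np.mean(noise ** 2)
    # Choose the scale so that 10 * log10(p_signal / p_noise_scaled) == snr_db.
    scale = np.sqrt(p_signal / (p_noise * 10.0 ** (snr_db / 10.0)))
    return audio + scale * noise
```

By construction, measuring the SNR of the returned mixture against the clean signal recovers `snr_db` exactly (up to floating-point error).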
**Response to "why not quoting the numbers from the original papers"**: We can quote the reported numbers from the original papers for Zhang, Harar, and Huckvale, although the results from Verde are not comparable. Firstly, it is important to realize that Verde et al. (2018) did not utilize the complete SVD dataset; the authors relied on a subset of well-separated samples, painting an overoptimistic picture of how well the method works. The authors of the three baseline methods (Harar et al., 2017; Huckvale et al., 2021; Zhang et al., 2022) all express this concern about the Verde et al. (2018) paper.
We list the methods' SVD testing accuracy of the original papers and our rebuilt results as follows:
|Methods|SVD Testing Acc - Original Paper|SVD Testing Acc - Our Rebuilt|
|--|--|--|
|Our Proposed Method|N/A|0.7289|
|Zhang et al. (2022)|0.7077|0.7077|
|Harar et al. (2017)|0.6808|0.6914|
|Huckvale et al. (2021)|0.6974|0.6255|
We will add this information to Table 3 in the revision. The new data does not impact the conclusion in the manuscript, as our method outperforms the three baseline methods: Harar et al. (2017), Huckvale et al. (2021), and Zhang et al. (2022). The following additional context explains why we had to rebuild the models:
Two baseline methods (Verde et al., 2018 and Huckvale et al., 2021) both employ SVM-based algorithms. Yet, neither of these studies provides details on the training accuracy within their papers. To ensure a comprehensive and fair evaluation, we decided to report the training accuracies for our method as well as the baseline methods. This required us to reproduce their methods, thereby allowing for a more transparent and reliable comparison.
Except for Zhang et al. (2022), the baseline methods do not provide enough information to reproduce their work, which may lead to degraded performance of our rebuilt models compared to the original works. For example, Harar et al. did not release their code publicly, and Huckvale et al. did not provide their feature selection criteria, how the features were selected, or the final list of features used in the model.
(Harar et al, 2017) - Harar, Pavol, et al. "Voice pathology detection using deep learning: a preliminary study." _2017 international conference and workshop on bioinspired intelligence (IWOBI)_. IEEE, 2017.
(Verde et al., 2018) - Verde, Laura, Giuseppe De Pietro, and Giovanna Sannino. "Voice disorder identification by using machine learning techniques." _IEEE access_ 6 (2018): 16246-16255.
(Huckvale et al., 2021) - Huckvale, Mark, and Catinca Buciuleac. "Automated detection of voice disorder in the Saarbrücken voice database: Effects of pathology subset and audio materials." _Proceedings of the Annual Conference of the International Speech Communication Association, INTERSPEECH_. Vol. 6. International Speech Communication Association (ISCA), 2021.
---
Rebuttal Comment 1.1:
Comment: Thanks for more experimental results and explanations on baselines. However, I keep the score unchanged due to no additional value added to the main theme of the paper. It would be better to choose the right baselines in the future due to the various reasons mentioned in the rebuttal. | Summary: This paper proposes to use a traditional intra-class correlation coefficient (ICC) to assess the repeatability of speech embeddings learned by a neural network. The proposed ICC regularizer has characteristics similar to the well-known contrastive loss, which aims to minimize the intra-class variance and maximize the inter-class variance of training data. The authors propose to use the ICC regularizer not in isolation but in combination with the contrastive loss. This paper performs several types of experiments on speech processing tasks such as speaker verification, voice style conversion, and dysphonic voice detection to demonstrate the effectiveness of the proposed regularizer. These tasks highlight the importance of repeatability in the target data.
Strengths: - The considered scenario is interesting and important to the community. The proposed method is simple yet effective in assessing the repeatability in the target data.
- This paper assesses the effectiveness of the proposed method not only with a single speech task but also with several tasks based on voice quality evaluation. The improvements look sufficient.
Weaknesses: - ICC is a very traditional metric, which is widely used in various data classification algorithms.
- The proposed ICC regularizer shouldn't be used in isolation.
- Lack of some experiments and details of experimental setup (see questions below).
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: - It seems that the proposed ICC regularizer corresponds to a special case of contrastive loss focused only on positive sample pairs. Although there are many variants of contrastive loss, the basic concept is to increase the capability of category classification by leveraging both positive and negative pairs. I am wondering if training only with the conventional contrastive loss could provide improvements similar to the proposed method, for example, by simply changing the ratio of positive sample pairs (reducing negative sample pairs) in the last half of the training stage. What advantages cannot be obtained by just changing the ratio of positive and negative pair data?
- It is difficult to understand the effectiveness of the proposed method correctly because experimental setups including data are very briefly written. For example, in the experiment described in Section 4.1, what are the training strategy of the model such as an optimizer, a learning rate scheduling, other regularization techniques (specAugment, DropConnect, etc.), data augmentation, and so on?
- In addition, a summary of data used for each experiment (a few more details) would be very much helpful.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 3 good
Contribution: 2 fair
Limitations: The same comments as one of weaknesses. The proposed ICC regularizer shouldn't be used in isolation but be used in combination with a conventional contrastive loss as clearly mentioned in the paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank reviewer Qvpp for the comments.
**Response to "ICC is a traditional metric and widely used for various classification tasks":** The ICC is primarily a statistical measure that describes how strongly units in the same group resemble each other. It's typically used in various research fields like psychology, sociology, or medicine, where repeated measurements are taken on the same subjects to assess the reliability of ratings or measurements. The ICC is not a commonly used metric in machine learning classification tasks which typically rely on accuracy, F1, AUC-ROC, etc. Sometimes the ICC is used in machine learning classification tasks related to clinical applications (Ugga et al. 2021) for assessing the reliability of models but we are not aware of other classification papers which utilize ICC.
Based on existing literature, employing the ICC as a measurement tool for assessing the quality of learned embeddings represents a novel contribution. Utilizing the ICC regularizer to ensure the model learns consistent and repeatable latent representations further distinguishes our approach as innovative. Within representation learning, our introduction of the ICC metric and ICC regularizer constitutes a significant contribution.
(Ugga et al. 2021) - Ugga et al. "Meningioma MRI radiomics and machine learning: Systematic review, quality score assessment, and meta-analysis." _Neuroradiology_ 63 (2021): 1293-1304.
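For readers unfamiliar with the metric, one common variant, ICC(1,1) from a one-way random-effects ANOVA, can be computed as below; this is a generic sketch, and the paper's exact ICC formulation may differ.

```python
import numpy as np

def icc_1(x):
    """ICC(1,1): one-way random-effects intraclass correlation.

    x: array of shape (n_groups, k_measurements), e.g. speakers x utterances.
    """
    n, k = x.shape
    group_means = x.mean(axis=1)
    grand_mean = x.mean()
    # Between-group and within-group mean squares from one-way ANOVA.
    msb = k * np.sum((group_means - grand_mean) ** 2) / (n - 1)
    msw = np.sum((x - group_means[:, None]) ** 2) / (n * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)
```

Perfectly repeatable measurements within each group give ICC = 1, while within-group variance dominating the between-group variance drives the value toward 0 or below.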
**Response to "ICC regularizer shouldn't be used in isolation":** As outlined in Lines 224-225 of the manuscript, we advocate for using the ICC as a regularizer rather than as a stand-alone loss function. The ICC regularizer is designed to act as a complementary component in learning repeatable latent representations. It's important to recognize that most regularizers do not function as stand-alone loss functions; instead, the ICC regularizer is intended to work synergistically with other components of the loss function to enhance the reliability and coherence of the learned embeddings. As additional evidence of the utility of the regularizer, we have conducted the following additional experiments:
(1) Run speaker verification (SV) testing on audio samples with noise added. We evaluate three SNR levels: 30, 35, and 40 dB. Adding the ICC regularizer improves the original loss for all three SNR levels evaluated. (For details, please see the response to Reviewer wtbk.)
|Method|SNR=40dB|SNR=35dB|SNR=30dB|
|--|--|--|--|
|GE2E|5.64%|7.00%|9.04%|
|GE2E+ICC|5.34%|6.76%|8.66%|
|AngleProto|4.77%|5.76%|7.66%|
|AngleProto+ICC|4.52%|5.59%|7.34%|
|SupCon|4.35%|5.21%|6.25%|
|SupCon+ICC|4.21%|5.09%|6.10%|
(2) Evaluated the two methods in Section 4.2, voice conversion (VC), objectively on three different metrics: speaker similarity score, word error rate (WER), and character error rate (CER). (For details, please see the response to Reviewer 89AX.)
(VC model used here is AutoVC)
||Speaker Similarity Score|WER|CER|
|--|--|--|--|
|GE2E Loss|0.2231|0.5810|0.3817|
|GE2E Loss + ICC Regularizer|0.2309|0.5109|0.3324|
A higher speaker similarity score indicates better model performance; lower WER and CER indicate better performance.
(3) Re-run the VC task with a SOTA VC model named FreeVC, using our speaker encoder (manuscript Lines 258-287), to conduct the experiment outlined in Section 4.2. (For details, please see the response to Reviewer 89AX.)
(VC model used here is FreeVC w/o SR.)
||Speaker Similarity Score|WER|CER|
|--|--|--|--|
|GE2E Loss|0.2753|0.2556|0.0755|
|GE2E Loss + ICC Regularizer|0.2899|0.2163|0.0718|
While this model is still training, the results from an intermediate evaluation tell a consistent story: when the regularizer is added to existing cost functions, performance improves on several metrics.
**Response to "ICC regularizer corresponds to the special case of contrastive loss focused only on positive sample pairs":** The ICC regularizer cannot be posed as a special case of contrastive loss focused only on positive sample pairs as it requires both positive and negative sample pairs. The ICC regularizer and contrastive loss have similarities in their optimization criteria: both aim to minimize the intra-class variance by using positive sample pairs and maximize the inter-class variance using negative sample pairs. However, compared to contrastive loss, the ICC regularizer results in a better trade-off between minimizing intra-class variance and maximizing inter-class variance when inter-class variance is relatively larger than intra-class variance. This was shown in Fig. 1 in the manuscript.
**Response to "changing the ratio of positive sample pairs (reducing negative sample pairs)":** Reducing negative sample pairs could help the loss focus on minimizing the intra-class variance; however, this comes at the cost of decreased inter-class variance due to the imbalanced training strategy. We use our GE2E loss simulation to demonstrate this: all configurations and parameters are kept the same as those from the simulation in Fig. 1 in the manuscript, except that we add a scalar to the negative-pairs score in the GE2E loss to simulate the impact of reducing negative sample pairs during training. We set the scalar to 0.25, i.e., the ratio of negative sample pairs to positive sample pairs is 1:4. The results are shown in Figure A (available in the global response PDF). The figure demonstrates that the trajectory of the optimal solution moves in the direction of decreased inter-class variance. This is clearly an undesired direction, as a good regularizer promotes lower intra-class variance but higher inter-class variance. Comparing this solution to the one from the ICC in Fig. 1 in the manuscript, we clearly see that the ICC achieves both, whereas simply changing the ratio of samples between the two classes only reduces the intra-class variance.
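The "scalar on the negative-pairs score" manipulation discussed above can be illustrated with a simple pairwise contrastive loss; this is a hypothetical stand-in for the GE2E loss, with `neg_weight` playing the role of the 0.25 scalar (1.0 recovers the balanced loss).

```python
import numpy as np

def weighted_contrastive_loss(d_pos, d_neg, neg_weight=1.0, margin=1.0):
    """Pairwise contrastive loss with an optional scalar on the negative term.

    d_pos: distances between positive (same-class) pairs.
    d_neg: distances between negative (different-class) pairs.
    neg_weight: 1.0 is the balanced loss; 0.25 mimics a 1:4 negative ratio.
    """
    pos_term = np.mean(np.asarray(d_pos, dtype=float) ** 2)
    neg_term = np.mean(np.maximum(0.0, margin - np.asarray(d_neg, dtype=float)) ** 2)
    return pos_term + neg_weight * neg_term
```

Down-weighting the negative term lowers the penalty on small inter-class distances, which is exactly why the optimum can drift toward decreased inter-class variance.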
**Response to "experiments and details of experimental setup":** We will add these in the revision. Please see the **Global Response**.
---
Rebuttal Comment 1.1:
Title: Lowering the ratio of negative sample pairs
Comment: Thank you for the response and some new results. I understood that, in the Monte Carlo simulation, lowering the ratio of negative sample pairs (scaling the negative pairs score) in the GE2E loss has an undesirable impact for the problem setting in this paper. I assume that you ran the simulation with the same ratio between positive and negative sample pairs during the entire training stage (in other words, all epochs). However, the appropriate ratio of positive and negative sample pairs for each downstream task should be different. It can be estimated with a development set. In that sense, it remains my concern whether the proposed method yields sufficient improvements over such baseline systems in actual downstream tasks.
---
Reply to Comment 1.1.1:
Comment: We thank the Reviewer Qvpp for engaging with our rebuttal and posing a new concern.
**In Reviewer Qvpp's initial review,** the reviewer was concerned that "_the proposed ICC regularizer corresponds to the special case of contrastive loss focused only on positive sample pairs._" Our rebuttal clarified that this was not the case as the proposed regularizer requires both positive and negative pairs. It is impossible to implement using only positive samples. The reviewer further questioned "_What are the advantages that cannot be obtained by just changing the ratio of positive and negative pair data?_" We showed via simulation that the loss surface achieved by intentional class imbalance can result in optimal solutions in the direction of decreased inter-class variance; clearly an undesirable outcome.
**In Reviewer Qvpp's most recent review,** the reviewer suggests that we should have optimized the imbalance ratio in our baseline results via a development set: "_However, the appropriate ratio of positive and negative sample pairs for each downstream task should be different. It can be estimated with a development set._"
In a literature review, we found no evidence to suggest that using an imbalanced ratio of positive/negative pairs can enhance model performance over a balanced baseline. On the contrary, much of the research (e.g., Wang et al., 2021; Dorigatti et al., 2022; Vito et al., 2022) concentrates on methods to intentionally balance this ratio, particularly in cases of imbalanced datasets.
Given:
- our empirical results we provided last time that demonstrate the negative impact of class imbalance in contrastive loss;
- no found evidence in the literature that deliberate class imbalance in contrastive loss improves performance;
- our demonstration last time that the proposed regularizer is different from the special case of contrastive loss focused only on positive pairs,
we do not see any reason to believe that an asymmetric contrastive loss should be considered as a baseline over the standard balanced approach. We have not come across any papers that use an asymmetric contrastive loss as a baseline. Furthermore, we have not come across any papers that use cross-validation to set the imbalance ratio for optimizing performance.
(Wang et al., 2021) - Wang, Peng, et al. “Contrastive learning based hybrid networks for long-tailed image classification.” _Proceedings of the IEEE/CVF conference on computer vision and pattern recognition_. 2021.
(Dorigatti et al., 2022) - Dorigatti, Emilio, et al. “Robust and Efficient Imbalanced Positive-Unlabeled Learning with Self-supervision.” _arXiv preprint arXiv:2209.02459_ (2022).
(Vito et al., 2022) - Vito, Valentino, and Lim Yohanes Stefanus. “An Asymmetric Contrastive Loss for Handling Imbalanced Datasets.” _Entropy_ 24.9 (2022): 1303. | null | null | null | null |
Hierarchical Adaptive Value Estimation for Multi-modal Visual Reinforcement Learning | Accept (poster) | Summary: This paper presents a new vision-based reinforcement learning algorithm for autonomous driving scenarios. In previous feature fusion methods, a single critic could amplify dominant features and obscure other features. To address this issue, the authors propose a local critic for each feature and combine them into a global critic using attention fusion. Additionally, the authors introduce a task-contextual re-fusion component to rebalance the estimator. The algorithm achieves promising performance on the CARLA driving task.
Strengths: 1. The writing in the paper is easy to follow, and the figures are clear and understandable.
2. The proposed algorithm has a simple and effective structure. The improvements made to existing methods in the framework section are intuitive.
3. The argumentation in the "Further Analyses" section of the paper is thorough and provides a good explanation for the potential sources of performance improvement brought by the algorithm.
Weaknesses: 1. The decomposition of the Q-network lacks novelty. Although this paper focuses on vision-based RL, the approach of using a separate critic for each modality and obtaining a global critic using self-attention is similar to some existing methods in the field of multi-agent RL[1]. The paper does not adequately highlight the differences in information processing for vision-based tasks compared to other types of input.
2. The central contribution of the paper lies in the proposed framework. However, apart from the front-end modality fusion and task-specific re-balance, all the content mentioned in section 3.5 appears to be supplementary to existing research[2][3] rather than original improvements. Therefore, I believe there is a lack of substantial contribution in the paper.
ref
[1] QMIX: Monotonic Value Function Factorisation for Deep Multi-Agent Reinforcement Learning
[2] Soft actor-critic: Off-policy maximum entropy deep reinforcement learning with a stochastic actor.
[3] Deepmdp: Learning continuous latent space models for representation learning
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: In Section 4.2, I noticed that the authors mentioned, "Since TransFuser and EFNet only support two modalities, we directly compare our method with SAC, DrQ, and DeepMDP." This suggests that these two baseline algorithms are not suitable for the task environment used in the current paper. Then why use them as strong baselines to compare?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 3 good
Contribution: 2 fair
Limitations: The experimental setup is not sufficiently fair. Although the baselines used for comparison include newer multimodal vision algorithms, they were originally designed for traditional visual tasks. In the tasks used for algorithm validation in this paper, there is a lack of a strong baseline that specifically demonstrates the superiority of the proposed algorithm.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We would like to express our sincere gratitude for your thoughtful review of our paper. Your positive remarks on our writing and analysis are truly appreciated, and we feel encouraged by your feedback. Regarding your concerns about the novelty of our work, we sincerely apologize if our presentation did not sufficiently highlight our unique contributions. We value your feedback and hope the following responses can clarify and address any misunderstandings that might have arisen.
> 1. lacks novelty compared with QMIX.
Thank you for your comments. Your observation regarding the similarity between HAVE and multi-agent RL is incisive. We kindly ask for your patience and attention as we delve into a more detailed explanation below.
Specifically, HAVE intends to answer: "What is the appropriate paradigm to determine state-action values considering the modality-specific contribution?" By harnessing resources of the RL field itself, we observed a similarity between the roles of "agent" in multi-agent RL and "modality" in multi-modal RL: both make distinctive contributions to the final reward and require effective cooperation to optimize the global decision. This resemblance highlights a shared principle across different domains within RL, which is itself a new observation not made in existing works.
However, the devil is in the details. The subtle yet critical aspects, when examined closely, differentiate our method from multi-agent methods. In particular, our method aims to solve a totally different problem than QMIX:
1. QMIX is proposed for multi-agent cooperative learning tasks where multiple agents share a global or collective reward. QMIX aims to decompose the global Q value for each agent's action, which is further used for training each agent's policy.
2. The proposed HAVE aims to answer: "How can we predict the Q value of a unified action from a single agent, considering modality-specific contributions under inconstant environment dynamics?" Unlike general multi-modality supervised learning, the Q value does not have a true ground truth during learning, and the current learning target of Q depends on the current critic. Hence, feature-level fusion is inadequate, and it is necessary to study how to estimate a more accurate Q value by fusing multi-modality estimators. Therefore, we have hierarchical and dynamic value fusion. In contrast, QMIX considers neither the choice of fusion paradigm nor the bi-level fusion of values.
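The hierarchical value fusion idea, i.e., combining per-modality local critic outputs into a global Q estimate via attention, can be sketched roughly as follows. The function names, shapes, and the exact fusion operator here are hypothetical illustrations, not HAVE's actual architecture.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())  # shift for numerical stability
    return e / e.sum()

def fuse_q_values(q_local, keys, query):
    """Fuse per-modality Q estimates into a global Q via attention weights.

    q_local: (M,) outputs of the M local (per-modality) critics.
    keys:    (M, d) per-modality feature summaries.
    query:   (d,) task-context vector scoring each modality's contribution.
    """
    attn = softmax(keys @ query / np.sqrt(keys.shape[1]))
    return float(attn @ q_local)  # convex combination of local values
```

Because the weights are a softmax, the global estimate stays within the range of the local estimates while letting the task context re-balance each modality's influence.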
> 2. ...all the content mentioned in section 3.5 appears to be supplementary to existing research (SAC[2] and DeepMDP [3]) rather than original improvements.
Our method focuses on multi-modality reinforcement learning for visual control tasks. SAC and DeepMDP are two well-known, classical baselines for single-modality reinforcement learning (RL) and visual representation learning in RL, respectively. It is reasonable and standard to study our problem based on them. In fact, many approaches take them as baselines, such as:
[1] Image augmentation is all you need: Regularizing deep reinforcement learning from pixels. ICLR. 2020.
[2] Curl: Contrastive unsupervised representations for reinforcement learning. ICML.2020.
[3] Stabilizing deep q-learning with convnets and vision transformers under data augmentation. NeurIPS. 2021.
[4] Learning invariant representations for reinforcement learning without reconstruction. ICLR. 2021.
[5] Masked contrastive representation learning for reinforcement learning. TPAMI 2022.
[6] Information Optimization and Transferable State Abstractions in Deep Reinforcement Learning. TPAMI 2022.
[7] Dreaming: Model-based reinforcement learning by latent imagination without reconstruction. ICRA. 2021.
[8] Isolating and Leveraging Controllable and Noncontrollable Visual Dynamics in World Models. NeurIPS. 2022.
[9] Denoised mdps: Learning world models better than the world itself. ICML. 2022.
[10] Reinforcement learning with action-free pre-training from videos. ICML, 2022.
[11] Masked world models for visual control. CoRL, 2023.
SAC is used as the baseline in [1,2,3,4,5,6], and DeepMDP is used in [7,8,9,10,11].
> 3. Why use TransFuser and EFNet as strong baselines to compare?
We apologize for the confusion caused by our previous statement in Sec. 4.2. Our statement was primarily grounded in concerns about computational complexity and practical implementation. Although it is technically possible to scale both methods to accommodate more than two modalities, such an adaptation would require significant modifications to the implementation, along with a quadratic increase in computational complexity due to the cross-attention mechanism.
In addition, most of the experiments in our paper adopt two modalities. Since EFNet and TransFuser have proven very robust with paired modalities, we use them as strong baselines following [1-3], and omit them only in our three-modality tests, which are a small portion of the total experiments.
[1] Safety-enhanced autonomous driving using interpretable sensor fusion transformer. PMLR 2023
[2] Plant: Explainable planning transformers via object-level representations. CoRL 2022
[3] Rgb-event fusion for moving object detection in autonomous driving. ICRA 2023
>4. The experimental setup is not sufficiently fair.
Thanks for your suggestion. We introduced a new baseline named MuMMI [1], which is also designed to manage multiple sensors in reinforcement learning scenarios. The results are detailed in Table 3 of the rebuttal PDF. Relative to other methods, HAVE has a pronounced advantage in performance.
[1] Multi-modal mutual information (MuMMI) training for robust self-supervised deep reinforcement learning. ICRA. 2021.
---
Rebuttal Comment 1.1:
Title: Additional information on novelty and contribution of HAVE
Comment: Considering you might have further concerns regarding the novelty and contribution of our work, we offer additional information here. This comment focuses on **detailed technical differences**, as well as the **inspirations, thoughts, and rationales** behind HAVE.
We first give a brief comparison of multi-agent RL (MARL) methods (e.g., QMIX) and HAVE in the following table:
| Method | QMIX | HAVE |
|:---|:---|:---|
| **Problem Focus** | Addresses decentralization of actions with a global reward during training. | Leveraging information from different sensors/input sources to take a single action. |
| **Challenges** | Assigning collective Q values to each agent having individual observations and actions. | Modeling a single accurate Q value from multi-modal perceptual data for a united action. |
At a high level, these will serve as the guidelines for our individual technical design. Going deeper into the details, we believe the following points call for emphasis:
### **1. Varied technical nuances and lines of reasoning**
As you may have discovered, QMIX uses a $\textcolor{red}{\text{monotonic hypernet}}$ to maintain monotonicity between individual agents' Q values and the global Q value, adhering to the Individual-Global-Max (IGM) property [1] (i.e., $\frac{\partial Q_{tot}}{\partial {Q_{a}}} \geq 0, \forall {a} \in A$) in MARL. IGM ensures decentralized policies align with centralized ones during local greedy actions. However, **it is not applicable to multi-modal RL** because 1) modalities collaborate to decide a final action, rather than determining actions greedily, and 2) estimated Q values may be affected by gaining/losing vital information due to modality characteristics and sensor noise, which breaks monotonicity. For instance, while a red traffic light may not be captured by event sensors, resulting in a high estimated Q value for thrust, the global Q value should decline in such cases. Therefore, although obtaining a separate critic for each modality is a natural first step, how to make the best of them remains unknown due to the inherent difference between MARL and our task.
To crack this, interactions between modalities and their relationships with the global modality are crucial. However, $\textcolor{red}{\text{QMIX's hypernets do not explicitly model these local-global interactions}}$. Furthermore, we also aim to avoid modality dominance, as stated in Remark 1 in Sec. 3.6 of the paper. To achieve these, the LVE paradigm in HAVE adopts $\textcolor{green}{\text{cross-attention between Q values and the local/global modality features}}$, which yields better results. While the attention mechanism still enforces IGM due to Softmax's non-negative weights, HAVE's $\textcolor{green}{\text{task-level re-fusion}}$ employs $\textcolor{green}{\text{non-monotonic hypernets}}$ to re-balance the values from LVE and GVE based on environmental rewards. We do not enforce positive weights from the hypernets, thereby lifting the IGM constraint between the final fused value and individual modality values. Such a bi-level process **jointly achieves 1) the prevention of modality dominance and 2) the discovery of opposing trends between individual Q values and the global Q value**, which cannot be achieved by a single step of value decomposition and merging.
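To make the bi-level process concrete, here is a minimal numerical sketch of the two fusion steps (an illustration only, not the paper's actual network code; the attention logits, hypernet weights, and Q values below are made-up numbers):

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def lve_fusion(q_locals, attn_logits):
    # LVE step: softmax attention weights are non-negative, so the fused
    # local value Q_l stays monotonic in each per-modality Q (IGM holds here).
    w = softmax(attn_logits)
    return float(w @ q_locals)

def task_refusion(q_l, q_g, hyper_w, hyper_b):
    # Re-fusion step: hypernet-produced weights are NOT constrained to be
    # positive, so the final value may move against an individual estimate.
    return hyper_w[0] * q_l + hyper_w[1] * q_g + hyper_b

q_locals = np.array([1.2, -0.3, 0.8])              # per-modality critics' Q values
q_l = lve_fusion(q_locals, np.array([2.0, 0.1, 1.0]))
q_g = 0.9                                          # global (fused-feature) critic's Q
q_fused = task_refusion(q_l, q_g, np.array([0.7, -0.4]), 0.05)
```

Note the second weight in `task_refusion` is negative here, which a monotonic (QMIX-style) mixing network could not produce.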
To our knowledge, no existing MARL method adopts such a bi-level fusion with the IGM constraint lifted. Meanwhile, current multi-modal RL methods also overlook modality-specific values. Therefore, we believe we have contributed an efficient multi-modal RL framework by re-thinking modality relationships with a novel learning paradigm.
It should also be noted that attention and hypernets are both classical techniques widely adopted by researchers, while the motivation, reasoning and implementation details can vary greatly for each method.
### **2. MARL methods will NOT work well directly on multi-modal RL (+Evidence)**
As analyzed above, MARL methods with the IGM constraint may not apply to our task. During the exploration of this work, we tested QMIX directly on multi-modal RL. However, the results were quite unsatisfactory (see the performance table below on CARLA). They demonstrate that solely decomposing critics for each modality is not enough for multi-modal RL. We release these results to clarify that HAVE is a standalone method, with thought and trial-and-error behind it, rather than a simple technical re-use of MARL methods.
| Scenario | QMIX | Ours-LVE | Ours-HAVE |
|-----------|---------|----------|-----------|
| Clearnight| 231±85 | 274±68 | 319±71 |
| Clearnoon | 264±78 | 294±82 | 336±76 |
Reference:
[1] QTRAN: Learning to Factorize with Transformation for Cooperative Multi-Agent Reinforcement Learning, ICML 2019
Please kindly let us know if our responses have addressed your concerns, or if any further clarification is desired.
---
Reply to Comment 1.1.1:
Title: Additional information on novelty and contribution of HAVE (continued)
Comment: To provide a clearer understanding, we offer a diagrammatic comparison between HAVE and QMIX below:
HAVE
---
$Modality_1$↘
$Modality_2$ →Actor→$\textcolor{red}{\text{action}}$
$Modality_n$↗
$Modality_1$,$\textcolor{red}{\text{action}}$ →$Critic_1$→$Q_{1}$↘
$Modality_2$,$\textcolor{red}{\text{action}}$ →$Critic_2$→$Q_{2}$→$Q_l$
$Modality_n$,$\textcolor{red}{\text{action}}$ →$Critic_n$→$Q_{n}$↗    ↘
                      $Q_{fusion}$ →Loss
$Modality_{1,2,...,n}$,$\textcolor{red}{\text{action}}$ →$Critic_{n+1}$→ $Q_{g}$↗
QMIX
---
$Agent_1$→$Qnet_1$→$Q_{11},Q_{12},...$→$\textcolor{red}{\text{$action_2$}}$ chosen for $agent_1$→$Q_{12}$↘
$Agent_2$→$Qnet_2$→$Q_{21},Q_{22},...$→$\textcolor{red}{\text{$action_1$}}$ chosen for $agent_2$→$Q_{21}$→ $Q_{tot}$→Loss
$Agent_n$→$Qnet_n$→$Q_{n1},Q_{n2},...$→$\textcolor{red}{\text{$action_j$}}$ chosen for $agent_n$→$Q_{nj}$↗
In conjunction with the provided diagram and the prior statements, it should be clear that HAVE and QMIX serve as distinct frameworks, catering to varied tasks and objectives.
Within the overall framework, the fusion of multi-modal Q-values bears resemblance to the multi-agent fusion approach in QMIX. Nonetheless, even when examining this specific step, our approaches differ in multiple aspects:
(1) Weighting: While QMIX adheres to the IGM principle and enforces positive weights, our setting does not face this requirement. In our case, a modality's prediction can occasionally be erroneous or negative, so we do not impose such constraints.
(2) Fusion Mechanism: Our approach uses a cascading technique: the fusion of Q-values predicted by individual modalities and the fusion of Q-values estimated from single-modality and fused-modality features do not operate at the same hierarchical tier (for a detailed explanation, refer to Section 3.4 of the main paper and the second paragraph of Section B in the supplementary materials). In contrast, QMIX operates without the need for cascading, and all agents participate in fusion at a uniform level. In the performance table of our last comment, we replaced our fusion approach with the QMIX fusion method; the inferior performance of QMIX-style fusion further highlights the advantage of HAVE's cascading approach.
---
Rebuttal 2:
Title: final discussions
Comment: Dear Reviewer,
As discussions come to an end soon, this is a polite reminder to engage with the authors in discussion.
Please note we take note of unresponsive reviewers.
Best regards,
\
SAC | Summary: The authors propose a Local modality-customized Value Estimation (LVE) paradigm, which dynamically estimates and adjusts the importance weight of each modality from the perspective of value-level to compensate for the multi-modal vision-based RL methods that usually use fused modality features for learning Global Value Estimation (GVE), which may be insufficient in policy learning. Furthermore, the authors propose a task-contextual re-fusion procedure to achieve a task-level rebalancing of estimates from feature and value levels. Experimental CARLA benchmark results show the improvement of multi-modal vision-based autonomous driving RL tasks.
Strengths: - The originality and novelty of the paper are solid. Inspired by some ideas in algorithms in Multi-Agent RL, an ingenious weighted local value estimation method suitable for multi-modal RL is proposed. And a task contextual re-fusion process is designed to combine better global value estimation and weighted local value estimation to obtain better multi-modal value estimation.
- The paper is written with high quality. The writing structure is clear and organized, the sentences are fluent and smooth without grammatical errors, and the language style is simple and easy to understand. Although there is no rigorous theoretical analysis, it gives some interesting insights from mathematics. In addition, the code and appendices are well documented and provide useful analysis and explanation.
- The significance of the paper is fine. Developing better algorithms to solve multi-modal RL tasks is an important research topic.
Weaknesses: - The experiments for this paper are still not adequate. The manuscript only conducts experiments on a single autonomous driving environment, CARLA, making it difficult to evaluate whether HAVE is suitable for broader multi-modal RL environments (such as simulated robot control). In addition, this work only experiments with multiple visual modalities and does not consider more common modal inputs such as text and voice.
- The paper lacks further theoretical analysis. Although two interesting remarks appear in Section 3.6 of the manuscript, the theoretical insight obtained is still relatively shallow. It lacks more in-depth theoretical analyses, such as the overall convergence of the algorithm and whether the algorithm optimizes some lower bound under the multi-modal RL setting.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: - The baselines compared in this paper are still not representative and pertinent. This paper mainly focuses on the multi-modal RL task of visual input, but there is no comparison with the classic Vision-based RL work, and there are more suitable baselines than SAC, DrQ, and DeepMDP. For example:
- Visual Reinforcement Learning with Imagined Goals, NeurIPS 2018
- Improving Sample Efficiency in Model-Free Reinforcement Learning from Images, AAAI 2021
- Stabilizing Deep Q-Learning with ConvNets and Vision Transformers under Data Augmentation, NeurIPS 2021
- Observing Figure 3 and Figure 4, it seems that the training curves have not yet converged. If the training process is extended, would HAVE be surpassed by other comparative methods?
- Why do TransFuser and EFNet only support two modalities, as mentioned in article 4.2?
- Would using neuromorphic event signals lead to totally fair comparisons? The algorithm's performance may be related to the peculiarity of the modality itself. For example, TransFuser may have its advantages when using the Bird’s Eye View (BEV) modality, but it may be challenging to have a good command of neuromorphic event signals
- Different local areas of each modality could also have different importance. Can HAVE balance the value of this finer-grained "modality"? And how?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The authors explained the limitations of their work well. For example, this work only conducts experiments on RL tasks of multiple visual modalities and does not consider more extensive multimodal input such as text and voice. In addition, the authors explained their potential negative social impact in the appendix. I do not see an obvious negative societal impact.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We are truly grateful for your time and effort invested in reviewing our work. Your positive feedback has been both encouraging and instrumental for the further improvement of our paper. Herein, we address the points raised in your comments:
>1. The experiments for this paper are not adequate since they are conducted only on CARLA.
Thanks for highlighting this. Following your advice, we have conducted experiments on an additional environment (DeepMind Control Suite for robot control). Please refer to our general response posted above for detailed results. Experiment results show that HAVE also works effectively in the new environment.
>2. The paper lacks further theoretical analysis.
Thank you for pointing out the need for a deeper theoretical analysis. We recognize the importance of a comprehensive theoretical foundation. The remarks in Section 3.6 were our initial efforts to provide theoretical insights into our approach. However, despite our earnest efforts to provide more theoretical proof, we encountered challenges that prevented us from establishing definitive results at this stage given the constraints of our current rebuttal timeline.
We believe the empirical results presented do offer a validation of our method's effectiveness in the multi-modal RL setting. We sincerely value your feedback and, in future iterations of this work, aim to delve deeper into the theoretical aspects you've highlighted. Incorporating such a comprehensive analysis would undoubtedly strengthen our work, and we're committed to pursuing this in subsequent research.
>3. The baselines compared in this paper are still not representative and pertinent.
Thanks for the valuable suggestion. In Table 3 of our uploaded rebuttal PDF, we give additional results for three baselines. Two of them follow your suggestion (SAC+AE [1] and SVEA [2]), and the third (MuMMI [3]) is a multi-modal vision-based RL method. Compared with Table 1 in the main paper, we see that HAVE still holds advantages over these baselines.
>4. Observing Figure 3 and Figure 4, it seems that the training curves have not yet converged.
Thanks for this astute observation. We conducted further experiments focusing on the figures that seemed to have the most serious convergence issues, namely Fig. 3 (left) and Fig. 4 (bottom left). For each figure, we evaluated two methods that appeared to be the least converged. The findings, presented in Fig.2 (b) and Fig.2 (c) of the rebuttal PDF, indicate convergence by 150K steps, with HAVE consistently outperforming the others. In subsequent revisions, we plan to present results up to 150K steps for all experiments.
>5. Why do TransFuser and EFNet only support two modalities?
We apologize for the confusion caused by our previous statement in Sec. 4.2. Our statement was primarily grounded in concerns about computational complexity and practical implementation. For clarity, the cross-modality attention mechanisms embedded within EFNet are designed for paired modalities. In this design, one modality calculates the query tensor, while the other calculates the key and value tensors. We also observed that only two modalities are utilized in both the Transfuser paper and its official implementation. Although it's technically possible to scale both methods to accommodate more than two modalities, such an adaptation would require significant modifications to the implementation, along with a quadratic increase in computational complexity. For instance, with EFNet, a cross-modal attention operation must be performed for every modality pair. In comparison, the complexity added by extra modalities only scales linearly with HAVE. We will revise our statement to articulate this more precisely and prevent further misunderstandings.
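The quadratic-versus-linear scaling argument above can be illustrated with a back-of-the-envelope count (illustrative only, not a measurement of the actual implementations):

```python
def pairwise_cross_attention_ops(n_modalities):
    # EFNet/TransFuser-style pairing: one cross-attention per ordered modality
    # pair (query from one modality, key/value from the other) -> quadratic growth
    return n_modalities * (n_modalities - 1)

def have_critics(n_modalities):
    # HAVE-style sketch: one critic per modality plus one global critic -> linear growth
    return n_modalities + 1

for n in (2, 3, 4, 5):
    print(n, pairwise_cross_attention_ops(n), have_critics(n))
```

For two modalities the pairwise cost is modest, but at five modalities the pairwise scheme needs 20 cross-attention operations versus 6 critics in the linear scheme.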
>6. Would using neuromorphic event signals lead to totally fair comparisons?
We fully understand your concern. Indeed, it can generally be difficult to disentangle the contribution of an architecture’s modality compatibility from the effectiveness of its multi-modal learning mechanism. To maintain as fair a comparison as possible, for TransFuser and EFNet, we utilized the same observation encoder as with HAVE. With EFNet, we can safely assume there is no compatibility issue, as it is specifically designed for event signals and has proven to be highly effective. TransFuser was originally designed for LiDAR BEV and RGB. However, the architectures of TransFuser and EFNet share considerable similarities (both are transformers with cross-modality attention), and LiDAR BEV has characteristics similar to event frames (e.g., sparsity). Based on these observations, we conjecture that the influence of compatibility might be negligible. In our experiments, TransFuser outperforms EFNet in certain environments, which supports our conjecture. For a more comprehensive comparison of the modality generalization ability of HAVE and TransFuser, we also tested them in DeepMind Control Suite under robot control environments, where the inputs are RGB and depth. Experimental results show that HAVE maintains its advantage over TransFuser.
>7. Can HAVE balance the value of this finer-grained "modality"?
Yes, it can. Specifically, the LVE in HAVE works by maintaining separate critic networks for each modality, allowing it to estimate individual state-values. Each critic in HAVE can therefore assign importance to local areas within its modality by activating only significant regions in the feature maps. As an illustration, we visualize the heatmap of different critic networks in Fig.2(f) in the rebuttal PDF, which aligns with our analysis.
**Reference**
[1] Improving Sample Efficiency in Model-Free Reinforcement Learning from Images, AAAI 2021
[2] Stabilizing Deep Q-Learning with ConvNets and Vision Transformers under Data Augmentation, NeurIPS 2021
[3] Multi-modal mutual information (MuMMI) training for robust self-supervised deep reinforcement learning. ICRA. 2021 | Summary: This paper proposes a fusion scheme for multi-modal RL, with a particular focus on fusing the modalities in estimating the value function. The authors propose a combination of global value estimation, local value estimation and another conditioned fusion of the global and local value estimation. The approach is evaluated in particular on fusing RGB and event-based data in an autonomous driving scenario with the CARLA simulator.
Strengths: The paper proposes a novel scheme of "value level fusion" that is new to me.
Weaknesses: - The authors propose a very complex scheme for fusing the q-value estimate based on the different modalities, having GVE, LVE and then another fusing of those two with a hyper network. Especially since the policy is selecting actions based on a simple concatenation of feature vectors of both modalities.
- There is a very high spread on the results which makes me wonder how significant the improvement is of the HAVE approach. Also, from Fig 3 and 4, it seems that the other methods are not necessarily converged yet. It would be interesting to see how the curves evolve if training would continue.
- All experiments are done on this single environment with a custom reward function. It would be more convincing if the approach was demonstrated on one extra environment, i.e. it is hard to assess how significant it is to drive for instance 42m further than another method (+/- 52m) to take the first line of Table 1 as an example.
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: - Why would the policy benefit from all the fusion in the value function if it is a neural network only taking the global modality feature as input?
- From the results in Fig 5 it seems bulk of the contribution of the value is from the RGB frame, as its weight never goes below 0.75?
- The only difference between the proposed approach and the baselines is the value function estimation. I would wonder whether the architecture yields a better value function (i.e. closer to the ground truth value) faster, which yields the improved training of the policy. However, if in the long run, any value iteration converges to the true value function, it should yield the same policy, no? I would be interesting to inspect the MSE of the value function compared a ground truth value estimate for the different algorithms over time, to see if this is indeed the case.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 2 fair
Limitations: The authors only mention the limitation to one set of modalities in their work. I would make this broader to note that the approach is only tested on this one environment.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We are truly thankful for the thoughtful remarks and the experimental recommendations you have provided. These suggestions shed light on what we can improve, and we believe they will be instrumental in refining our work further. We address your main concerns as follows:
> ...fusing the q-values is complex. Why would the policy benefit from this if it only takes global modality feature as input?
Thanks for the great question. First, we wish to emphasize: it is the value function that provides supervision signals for the policy network. Therefore, the factors that influence the policy network are two-fold: 1) accurate value estimation, and 2) robust input features. Our paper primarily addresses the first, using multi-modal strategies for better Q-value estimations. When the Q-value function is accurate, it enriches the feedback to the policy by reliably estimating future rewards for potential actions. This is the reason behind our hierarchical design, which balances individual modality contributions for optimal Q-values.
As for the input features, our work shows that the robust Q-values estimated by HAVE can already drive the policy network to select predictive features from only the global modality feature. While we could introduce a complex modality fusion module, its success still hinges on precise Q-values, which drive system learning. Thus, we prioritized Q-value estimation. However, your question raises a novel and natural extension of our work: to develop proper feature selection mechanisms for the policy network. We sincerely appreciate this and will definitely work on it in the future.
> ...very high spread on the results. How significant is the improvement? ... seems the other methods are not necessarily converged yet.
Thanks for highlighting this. Result variance is due to initialization randomness, a known RL challenge (cf. experiments in [1], [2]). Following common practice [1][2], we averaged results from five runs with different seeds for Fig. 3 and Fig. 4. The results reveal HAVE's superior performance by 120K steps. Some curves might not seem converged at 120K; with limited computational resources during the rebuttal, we re-ran experiments on the figures with the most apparent non-convergence (i.e., Fig. 3 (left) and Fig. 4 (bottom left)), each with the two methods that appeared least converged. The results are given in Fig. 2(b) and Fig. 2(c) of the rebuttal PDF, respectively. They show convergence around 150K steps, with HAVE still leading. In future revisions, we will extend all experiments to 150K steps.
> It would be more convincing if the approach was demonstrated on one extra environment.
Thanks for your valuable suggestion. We have conducted more experiments on an additional environment DeepMind Control Suite for robot control. Please refer to our general response posted above for detailed results. Experiment results show that HAVE also works positively in the new environment and can improve over other methods.
> ...the contribution of the value from the RGB frame...never goes below 0.75?
Thanks for this keen observation. A higher weight to RGB signals over event signals can be attributed to two factors:
1) Content richness. Compared to events, RGB images contain substantially more information, especially static content like colors and textures, which is important for decision-making. Event signals, on the other hand, mostly reflect moving edges. In our demonstration, the RGB frames are clearly discernible by the agent, so they dominate the modality contribution with their abundant content. In Fig. 2(a) of the rebuttal PDF, we provide the modality weights under extremely low-light conditions where the RGB frames are barely observable; there, the event signals play a more important role.
2) Event signals are derived from discrete intensity shifts, which can introduce temporal noise and affect data reliability. Meanwhile, RGB signals offer a smoother, more consistent environment depiction, reducing noise concerns.
> Whether the architecture yields a better value function faster, I would be interesting to inspect the MSE of the value function compared a ground truth value estimate for the different algorithms over time.
We provide the MSE curve for the ClearNight weather of CARLA; please see Fig. 2(e) in the rebuttal PDF. The ''ground truth'' depicted in this figure is computed with the Monte Carlo method, which calculates the cumulative reward from the current state to the end of the test episode. From Fig. 2(e), we see that all methods reduce their value error and their MSE curves grow closer over training, yet HAVE still outperforms all methods at the end of training. Note that such a ground-truth value cannot be obtained beforehand during training, so this is for reference only, and a rigorous analysis requires further exploration.
RL typically uses Temporal Difference (TD) learning to update the networks, where the learning target of the value function is formed from collected rewards and the current value estimator (i.e., the critic network). That is, there is no real ground truth for the Q value during training, and we cannot obtain a good value estimator simply by training longer, because errors accumulate. Hence, designing a better value estimator is very important, and that is our main goal.
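The difference between the Monte Carlo "ground truth" used in Fig. 2(e) and the bootstrapped TD target can be sketched as follows (the rewards and next-state Q value are hypothetical numbers for illustration):

```python
def monte_carlo_return(rewards, gamma=0.99):
    # Discounted return from the current state to episode end:
    # the reference "ground truth" only available after the episode finishes.
    g = 0.0
    for r in reversed(rewards):
        g = r + gamma * g
    return g

def td_target(reward, q_next, gamma=0.99):
    # TD learning target: bootstraps from the current critic's estimate q_next,
    # so any bias in the critic propagates into the learning signal.
    return reward + gamma * q_next

rewards = [1.0, 0.5, 2.0]
mc = monte_carlo_return(rewards)        # full-rollout value, no bootstrapping
td = td_target(rewards[0], q_next=2.3)  # depends on the (possibly biased) critic
```

Because the TD target leans on `q_next` rather than the true remaining return, a more accurate value estimator directly improves the quality of every update.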
**Reference**
[1] Soft Actor-Critic: Off-Policy Maximum Entropy Deep Reinforcement Learning with a Stochastic Actor, ICML 2018.
[2] Image augmentation is all you need: Regularizing deep reinforcement learning from pixels, ICLR 2020.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for their responses, and I appreciate the additional results provided. I have adjusted my score accordingly.
---
Reply to Comment 1.1.1:
Title: Thank you for your positive support
Comment: We sincerely appreciate your experimental advice and inspiring questions that give us new research directions. We have learned a lot from the review. Thank you again for your effort and supportive feedback. | Summary: The paper proposes a Hierarchical adaptive value estimation (HAVE) framework for multi-modal (RGB, event, depth) visual reinforcement learning. Firstly, a modality-specific value function learning process is proposed, and an assignment module is proposed to weight different modality. Second, a re-fusion is proposed to futher combine global feature based value and modality-specific based value. Method is evaluated on CARLA simulation environment. Various baselines, including SAC, DrQ, DeepMDP, etc are compared and the results show the proposed HAVE perform better under different weather conditions.
Strengths: The decomposition of the global value function into a weighted combination of modality value functions is novel and well-motivated by the observation that simple feature fusion can cause dominant-modality behaviour. The weight assignment module is well designed and shows meaningful behaviour in Figure 5. The re-fusion further lets the method enjoy both value and feature fusion, achieving better results. Results are strong compared to other methods on CARLA across different weather conditions.
Weaknesses: 1. The auxiliary loss term in equation 14 is not ablated.
2. Ablation study on the design of re-fusion is missing. Why such dynamic fusion (i.e. predict FC weights and bias) is necessary, will other simple way of fusion work?
3. The introduction of weight assignment and re-fusion increases #params and #FLOPs, which are not listed in the paper. Further, to show that the performance improvement does not come from the increased #params and #FLOPs, an ablation study could be conducted comparing against baselines with similar #params and #FLOPs.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Can authors address the concerns in weaknesses part
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: As discussed by authors, currently one RGB, event, Depth modalities are considered, other modalities remains unknown. And currently the method is only evaluated on one driving simulation environment, whether it can be applied to other environment or real world is unknown.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: First and foremost, we would like to express our gratitude for your positive comments. We also deeply appreciate the practical experimental suggestions given in your constructive feedback, which offers valuable perspectives that will strengthen our work. Below are our responses to your concerns:
>1. The auxiliary loss term in equation 14 is not ablated.
We agree that conducting an ablation study of auxiliary losses would elucidate their contributions. We have carried out the corresponding experiments in the ClearNight environment of CARLA. The results are illustrated in Fig.2(d) of the rebuttal PDF. Specifically, three terms in Eq.13 comprise $L_{aux}$: the global transition loss, the sum of individual transition losses, and the reward prediction loss. When compared to the full results of HAVE in Fig.4 (top left) of the main paper, Fig.2(d) reveals that omitting any of these terms negatively impacts model performance. Among them, the reward prediction loss is the most crucial, leading to the most significant performance drop. When all three terms are excluded (i.e., the 'w/o aux' curve in Fig.2(d)), there is a notable decline in performance.
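For concreteness, the composition described above amounts to a simple sum of the three terms. The sketch below is illustrative only; the function name and arguments are hypothetical, and the actual loss terms are defined in Eq.13 of the paper:

```python
def aux_loss(l_global_trans, l_indiv_trans, l_reward):
    """Hypothetical sketch of the auxiliary loss: the global transition loss,
    plus the sum of individual (per-modality) transition losses,
    plus the reward prediction loss."""
    return l_global_trans + sum(l_indiv_trans) + l_reward
```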
>2. Ablation study on the design of re-fusion is missing. Why dynamic fusion is necessary, will other simple way of fusion work?
Thanks for your good question. This question can be separated into two parts.
On why we need dynamic fusion over static fusion (such as taking the mean of the GVE and LVE values): LVE does not always outperform when GVE provides accurate estimates, and conversely, GVE does not always excel when LVE estimates are precise. Their effectiveness is contingent upon the attributes of the current global modality, which can vary drastically depending on the environmental context. To address this variability, we employ a hypernetwork for a dynamic combination of the two. While we have experimented with directly merging GVE and LVE, Table 1 in the rebuttal PDF indicates that, although direct fusion is feasible, dynamic fusion through the hypernetwork proves more efficacious.
On why we need a hypernetwork to perform dynamic fusion: the rationale behind employing hypernetworks is to avoid directly feeding the global modality feature $f_t^g$ into the mixing network. This approach is taken because, ultimately, we want the aggregated value to benefit from the additional global modality information in $f_t^g$ without being overly influenced by it. If we fed $f_t^g$ directly into the mixing network, the output value would depend on it arithmetically. This could impose unnecessary fusion difficulty, since $f_t^g$ may vary significantly across different environmental situations. Meanwhile, the re-fusion process would then no longer strictly be a task-level value fusion, because the feature $f_t^g$ would have actively engaged in the fusion process. Instead, the utilization of hypernetworks offers the ability to condition the mixing network's weights on $f_t^g$ in a versatile manner while not overly constraining the fusion process.
Hypernetworks generate weights for primary networks, allowing for dynamic adaptation, parameter efficiency, and conditional computations, making them versatile tools for various tasks[1,2,3].
[1] A Brief Review of Hypernetworks in Deep Learning. arXiv preprint arXiv:2306.06955 (2023).
[2] Principled weight initialization for hypernetworks. ICLR. 2019.
[3] Continual learning with hypernetworks. ICLR. 2020.
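To make the mechanism concrete, here is a minimal sketch of a hypernetwork producing mixing weights conditioned on a global feature. This is not code from the paper; the names, dimensions, and the non-negativity choice are all illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def hyper_mixing_weights(f_g, W_h, b_h, n_values):
    """Hypernetwork: a linear map from the global feature f_g to the
    (flattened) weights of the mixing layer. The absolute value keeps the
    mixing weights non-negative, a common monotonicity trick."""
    return np.abs(W_h @ f_g + b_h).reshape(1, n_values)

feat_dim, n_values = 8, 2                    # illustrative sizes
f_g = rng.normal(size=feat_dim)              # stands in for f_t^g
W_h = rng.normal(size=(n_values, feat_dim))  # hypernetwork parameters
b_h = np.zeros(n_values)

values = np.array([0.7, 1.3])                # e.g. GVE and LVE value estimates
W_mix = hyper_mixing_weights(f_g, W_h, b_h, n_values)
fused_value = (W_mix @ values).item()        # dynamically fused value
```

The key point is that $f_t^g$ determines the *weights* of the mixing layer rather than entering the mixing layer as an input, so the fused value remains a combination of the modality value estimates.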
>3. Increased #params and #FLOPS are not listed in the paper. Ablation studies should be designed to compare baselines with similar #params and FLOPS.
Thanks for your suggestion. A comparison of baselines with a similar model complexity is indeed instructive. To address this, in the rebuttal PDF's Table 2, we outline the number of parameters (#params) and FLOPS for HAVE, TransFuser, and EFNet. A few key observations emerge:
1. All three methods share the same observation encoder architecture.
2. As a value estimation method, HAVE directly concatenates modality features and doesn't incorporate advanced feature fusion modules. Conversely, TransFuser and EFNet introduce significant #params and FLOPS with their attention-based modality feature fusion modules.
3. The critic network in HAVE includes weight assignment and re-fusion networks, adding to its complexity. Nonetheless, this added complexity is modest when juxtaposed with the feature fusion modules of TransFuser and EFNet. Overall, HAVE still achieves the best result with the least #params and FLOPS. It is also worth noting that the critic is utilized only during the training phase and not used in testing. Thus, there is no actual increase in either the parameter counts or FLOPS for HAVE at the test stage.
---
Rebuttal Comment 1.1:
Title: Thank authors for the response
Comment: Thank authors for the response, the detailed responses have resolved most of my concerns.
---
Reply to Comment 1.1.1:
Title: Thank you for your feedback
Comment: We deeply appreciate the time and effort dedicated to your detailed and informative review. We will incorporate the experimental advice in our revised manuscript. Thank you again for your support and valuing of our work. | Rebuttal 1:
Rebuttal: **To All Reviewers (Additional Experiments on New Environment)**
We thank all the reviewers for their valuable time and insightful feedback. In this general response, we would like to address the concerns about the effectiveness of our HAVE in environments other than autonomous driving. Specifically, we conduct additional experiments using the Mujoco-powered **DeepMind Control Suite** [1] with RGB and depth as input modalities on two challenging tasks (Cheetah, run and Walker, walk). The training and testing curves of HAVE and other comparable methods are given in Fig.1 in the uploaded one-page rebuttal PDF. From the results, it is clear that HAVE also outperforms other methods in these robot control tasks.
**Reference**:
[1] dm_control: Software and tasks for continuous control, Software Impacts, 2020
Pdf: /pdf/15717bb0c40b8bb24a0654d1a926e79115eeef67.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Jailbroken: How Does LLM Safety Training Fail? | Accept (oral) | Summary: This paper offers an insightful examination of adversarial misuse, or "jailbreak" attacks, against large language models (LLMs) such as OpenAI's GPT-4 and Anthropic’s Claude v1.3. By analyzing two proposed failure modes of safety training—competing objectives and mismatched generalization—the authors provide an empirical study of why these attacks can succeed and how they might be created.
Nevertheless, this paper could be critiqued for a lack of novel methodological or technical insights.
Strengths: This paper provides a comprehensive examination of the vulnerabilities in large language models (LLMs) to "jailbreak" attacks.
Weaknesses: I enjoyed reading the extensive experimental results and inspiring observations. The main reason I did not raise a higher score is that the paper seems to lack rigor in defining key concepts and quantifying its statements.
For example, the paper did not precisely define what a jailbreak attack is in the context of LLM. It is unclear what is safe/unsafe since that can be quite subjective. In making "analysis", e.g., "Competing Objectives" in Section 3.1, the paper only provided selected examples rather than formalism.
I feel the empirical studies are not sufficient to publish because there have been many existing empirical studies on jailbreaks and this paper does not distinguish itself from the rest with more in-depth formalism.
However, I could be biased. So I can adjust scores after reading other review comments and authors' responses.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: none
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: technical insights or novel method
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your review of our paper, and for pointing out areas of concern. Here's our response to the issues you raised:
1. **Definition of jailbreak attacks.** We would like to highlight that we present a formal threat model for jailbreak attacks in Section 2.1 in terms of restricted behaviors. On the other hand, a completely mathematical definition of restricted behavior may be an unreasonable expectation, since judging harm (or other undesirable LLM output) is inherently subjective. Nonetheless, the potential of LLMs to cause harm is well documented [1, 2] and is a concrete concern for the deployment of such systems, especially as their capabilities improve in the coming years.
To minimize the degree of subjectivity in our study, we defer the definition of harm to the model creators and follow a clear labeling scheme, detailed in Appendix B. In particular, we find that the tested models refuse to answer almost all of the harmful prompts without jailbreaking (see the performance of the "none" attack in Tables 1 and 3). This, along with the fact that the curated dataset was drawn from red-teaming efforts of the model creators, suggest that these prompts were considered harmful by the model creators.
2. **Related work on jailbreaks.** We are not aware of any empirical studies on jailbreaking language models of a similar nature as ours at the time of submission. If you know of any related work we may have missed, please let us know, and we will be sure to cite and discuss them.
3. **Conceptual contributions.** Finally, we would like to emphasize that our conceptual contribution of identifying the failure modes of existing methods goes beyond our empirical study. We believe these insights will push the field forward, as the weaknesses we identify cover almost all known attacks and highlight why current methods fail systematically. Indeed, as part of our responsible disclosure process we shared our results with affected model creators (OpenAI and Anthropic), and their feedback indicated that our results were not only novel, but also interesting and valuable to them.
[1] https://arxiv.org/pdf/2112.04359.pdf
[2] https://arxiv.org/pdf/2303.08774.pdf
---
Rebuttal Comment 1.1:
Comment: The rebuttal has addressed most of my comments. I will increase the score. | Summary: The paper studies jailbreak attacks. More specifically, the authors hypothesize two failure modes of LLMs, 1. competing objectives and 2. mismatched generalization, and base their discussion of why jailbreak attacks succeed on these hypotheses. They then quantitatively perform experiments on different jailbreak attacks and report the success of each.
Strengths: 1. The paper studies a timely and important topic.
2. The observations/discussion on competing objectives was interesting.
Weaknesses: 1. I would have liked to see more in-depth and concrete discussion of what could be done to improve these systems against jailbreak attacks; even a small-scale execution of this idea would have made the paper very strong.
2. The dataset discussion in section 2.2 could be more organized and clear with more detailed discussions to increase clarity.
3. I felt that the results do not directly support the claims made in the paper that jailbreaks happen due to the two reasons hypothesized by the authors: 1. competing objectives and 2. mismatched generalization. The quantitative results mostly show the success of the jailbreaks, which is a nice finding on its own but mostly known; they do not directly support the claims that these attacks succeed due to competing objectives and mismatched generalization. Moreover, there might be other contributors not considered in this work. Although the authors provide discussions and tie these jailbreaks to the hypothesized reasons in Section 3, in my opinion the analysis in Section 4 does not strongly and directly support those claims. Maybe the paper needs to be motivated in some other way, or written in a different manner, so that the claims are supported and reflected accurately in the experimental discussions and results.
**Minor comments:**
Typo line 288 ( such an attack is technically is beyond the scope of our threat model -> such an attack is technically beyond the scope of our threat model).
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: Addressing my first and third concerns in the weaknesses section would be good.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 3 good
Contribution: 3 good
Limitations: Authors provided a discussion on limitations and broader impacts of their work which is appreciated.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the thoughtful and detailed review! Responding to each of the points you’ve raised:
1. **On defenses and small-scale experiments.** Towards better defenses, in Section 5 we argue that successful defenses may need to move beyond the existing pretrain-then-finetune paradigm. In this vein, the approach of Perez et al. to incorporate human preferences *during pretraining* [1] is a promising example that plausibly addresses both failure modes of standard RLHF.
Furthermore, we highlighted white-box studies involving open-source models as a future direction in the Conclusion because no safety-trained open-source large language models existed at the time of writing, so there were no available small-scale models to experiment on. (The first such model, Llama 2 from Meta [2], was released after the end of the NeurIPS review period.) And even with open model weights, there still exists a significant gap between proprietary and open-source datasets/infrastructure for safety training. We nonetheless agree that further investigation with white-box access is an exciting future direction.
2. **On support for claims about competing objectives and mismatched generalization.** We would like to highlight that our ablations (in Section 4) of the simple example attacks (from Section 3) aim to pin down the mechanism of these attacks using only black-box access.
For instance, for prefix injection, we show that the injected prefix matters for the success of the attack. Changing to an innocuous prefix reduces effectiveness significantly. This indicates that the autoregressively decoded prefix plays a key role in determining the refusal (or lack thereof) of the model. This autoregressive decoding behavior is a direct consequence of the pretraining objectives.
And for Base64, we show that Base64 input suffices for a successful attack: just providing the input in Base64 suffices to escape safety training. This demonstrates that instruction following generalizes to Base64 inputs, but safety training does not. Given the success of RLHF, it is very plausible that including examples of Base64 inputs in the safety training set would lead to safety on this category of inputs. These together suggest that it is indeed mismatched generalization that drives the success of this attack.
[1] https://arxiv.org/abs/2302.08582
[2] https://arxiv.org/abs/2307.09288
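For reference, the encoding step of the Base64-input attack family is simply standard Base64 encoding of the prompt text. The sketch below is illustrative; the wrapper instruction mentioned in the comment is an assumption, not the exact text used in the paper, and the example prompt is deliberately innocuous:

```python
import base64

def to_base64_input(prompt: str) -> str:
    """Encode a prompt in Base64, as in the Base64-input attack family:
    instruction following generalizes to such inputs while safety
    training may not."""
    return base64.b64encode(prompt.encode("utf-8")).decode("ascii")

encoded = to_base64_input("What is the capital of France?")  # innocuous example
# Round-trip check: decoding recovers the original prompt.
assert base64.b64decode(encoded).decode("utf-8") == "What is the capital of France?"
```

In the attack, the encoded string is sent as the model input, optionally preceded by an instruction such as "Respond to the following Base64-encoded request" (phrasing assumed here for illustration).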
---
Rebuttal Comment 1.1:
Comment: I thank the authors for providing responses to my review. I am still not fully convinced about the discussions provided by the authors on the second point. If authors claim "Given the success of RLHF, it is very plausible that including examples of Base64 inputs in the safety training set would lead to safety on this category of inputs. " I would like to see some experiments on this and actual results to prove the point. I feel like there is lack of experimental and ablation studies to support some of the claims made.
Regarding the comment 1 and it being deferred to future work, again this can add significant value to the paper and since currently it is missing from the current work, I am going to keep my score as is. | Summary: This manuscript provides an initial exploration of the robustness of LLM systems against adversarial misuses, specifically focusing on "jailbreak" attacks. The authors successfully summarize existing threats, propose plausible hypotheses, and conduct empirical evaluations on three LLMs: GPT-4, Claude v1.3, and GPT-3.5. They identify two failure modes in the current safety training: Competing Objectives and Mismatched Generalization. The paper concludes by deriving valuable defense implications from their analysis and evaluations. Overall, this submission initiates a constructive discussion on the development of safe LLM systems.
Strengths: The paper addresses a timely and significant topic. The authors carefully examined the problem, presented their hypotheses, and supported them with well-designed empirical experiments. Helpful defense implications are also provided following the analyses.
Weaknesses: Certain sections of the writing could be refined to improve comprehension for a general audience. For example, the introduction to the current safety training mechanism and the general LLM training framework lacks sufficient detail, which may hinder readers' understanding of the concepts of Competing Objectives and Mismatched Generalization. Furthermore, the significance of the paper is constrained by the absence of direct access to detailed information regarding LLMs they use in their paper.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: 1. Section 2.2 primarily focuses on the detailed description of the models and datasets used in the authors' evaluation. Would it be better placed in Section 4?
2. There appears to be some overlaps between Competing Objectives and Mismatched Generalization. For example, both the use of encoded output in Base64 and requesting unusual output formats may exploit prefix injection. Can you better explain the relationship between these two failure modes?
3. The description of the Adaptive Attack in Section 4.1 lacks clarity.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: The authors discussed the limitation arising from the lack of detailed information on the LLMs they use; thus their hypotheses cannot be directly confirmed.
Flag For Ethics Review: ['Ethics review needed: Privacy and Security (e.g., consent, surveillance, data storage concern)']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your detailed review and constructive feedback! To respond to the points you’ve raised:
1. **On background on LLMs and safety training.** Thanks for the suggestion—we will provide a more comprehensive review of background in the final version to make the paper more accessible for a broader audience.
2. **Competing objectives vs. mismatched generalization.** You are correct that these two categories are not mutually exclusive—successful attacks often combine strategies that exploit both. On the other hand, our ablation of the Base64 attack shows that encoded input alone suffices to break safety training in many instances. Thus, prefix injection is not a necessary component of the Base64 attack; the fact that the *input* has unusual formatting suffices.
3. **Clarification on adaptive attack.** We appreciate the feedback and will elaborate on the description of the adaptive attack in the final version: “To model an adaptive adversary who selects an attack based on the specific prompt, we implemented a straightforward ‘adaptive’ attack strategy. We consider this attack successful if any one of the 28 different evaluated attacks succeeds at eliciting an on-topic response to the harmful prompt.” Please let us know if this clarification addresses your question.
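The success criterion for the adaptive adversary reduces to a disjunction over the per-attack outcomes. A minimal sketch follows; the function and dictionary names are illustrative, not from the paper:

```python
def adaptive_attack_success(per_attack_results: dict) -> bool:
    """per_attack_results maps each evaluated attack (28 in the paper) to
    whether it elicited an on-topic response for this prompt. The adaptive
    adversary succeeds if any single attack succeeds."""
    return any(per_attack_results.values())

# Illustrative outcomes for a single harmful prompt:
outcomes = {"none": False, "prefix_injection": True, "base64": False}
assert adaptive_attack_success(outcomes) is True
```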
---
Rebuttal 2:
Comment: Reviewer, please confirm that you read this rebuttal and adjusted your score and review if appropriate. | Summary: The paper evaluates GPT models and Claude models against Jailbreaks and comes up with two possible explanations for why jailbreaks are successful- 1. competing objectives between pretraining + IF finetuning and safety finetuning; and 2. pretraining+finetuning generalizing better than safety tuning. It also argues for safety-capability parity and that we need to move beyond the pretraining + post-training for safety paradigm to come up with more sophisticated methods for safety training.
Strengths: Strengths:
1. Very clearly and well written; enjoyable read
2. Makes intuitive+compelling hypotheses around why jailbreaks are successful and makes them explicit and clear; provides evidence supporting the hypotheses
3. The safety-capability parity claim the paper makes is important and it's presented well with evidence for why its needed
4. I think the paper is significant given that it makes a compelling argument for why the current paradigm of safety finetuning won't scale and with good evidence for it. The evaluations and hypotheses are quite intuitive + based on existing work but I think the presentation of this work in the paper and the narrative it creates is compelling and important.
Weaknesses: Weaknesses:
1. Evaluations could be more robust and have a higher quantity of prompts
2. The main claims and methods are interesting and well-presented but not particularly surprising / novel
3. The paper should make clear that the two hypotheses are two reasons for jailbreaks, but not the entire surface area of possible reasons.
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: It would be great if more data is added to evals and the evals are made transparent + available for everyone
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 3 good
Contribution: 2 fair
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your thoughtful review and feedback! We're glad you found the writing and hypotheses compelling. To respond to the points raised:
1. **On the number of prompts evaluated.** We acknowledge the desire for a more extensive set of prompts. However, our choice of 317 prompts for evaluation is consistent with other studies in this area, such as Shaikh et al. [1], who used 200 synthetic prompts. This number of prompts balances statistical power (see Table 2), API usage cost, and labeling effort.
2. **On data sharing.** We understand the importance of transparency and agree that wider access to our evaluation dataset could benefit the research community. Due to the potential for misuse (a concern shared by the ethics reviewers for this paper), we have been sharing our dataset upon request rather than openly releasing it. We are committed to ensuring access to any group with a legitimate research use and have already shared our dataset with several research groups in academia and industry who have reached out.
3. **On possible hypotheses for jailbreak.** You are right in that our mechanisms cover a large fraction, but not all, of the diverse set of known attacks. As in any security setting, there can be a long tail of possible vulnerabilities. We view our contribution as identifying prevalent pitfalls, similar in spirit to how OWASP maintains a list of the top vulnerabilities in web security [2]. We will clarify regarding this in the final version.
Again, thank you for the review, and let us know if you have any further questions!
[1] https://arxiv.org/abs/2212.08061
[2] https://owasp.org/www-project-top-ten/
---
Rebuttal Comment 1.1:
Comment: Reviewer, please confirm that you read this rebuttal and adjusted your score and review if appropriate. | null | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: This paper analyzes why safety training of LLMs fail and what could be the root causes for them. They offer two explanations: competing objectives and inadequate coverage of the model capabilities during safety training. They categorize existing attacks and propose some new ones into these two buckets, and evaluate GPT-4 and Claude v1.3. The results show that both the models are susceptible to jailbreaks.
Post-rebuttal comment: The authors addressed my concerns. I am happy to keep my score of weak accept.
Strengths: I am broadly in agreement with the explanations put forth in this paper. The paper is written quite precisely and in a convincing manner. The experiments show that most of the 317 prompts constructed can be used in combination with one of the methods to successfully jailbreak.
Weaknesses: The paper is mainly about evaluation of the models and the difficulty ahead of the defenses. While it is informative, it would have strengthened the paper to try some experiments and ablations on safety training (e.g., trying to remedy the mismatch in pretraining and safety-tuning in some ways). I can imagine the difficulty of experimenting with large, closed-source models, but some experimentation with small/medium open-source models would have helped.
My other concern is how much of the material in the paper is novel. Several of the existing attacks may have identified these or equivalent root causes and I am unable to clearly estimate how much of the paper's content is novel/surprising.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: What specific explanations and observations in the paper are novel when the existing attacks are taken in account?
Do you have any analysis of safety training interventions based on your identification of the key limitations of the current methods?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: The authors have discussed the limitations and societal impact. They have also reported that they have disclosed their findings to the model authors and will coordinate the future release of the specific attacks with them to avoid harm. I appreciate these efforts.
Flag For Ethics Review: ['Ethics review needed: Inappropriate Potential Applications & Impact (e.g., human rights concerns)', 'Ethics review needed: Responsible Research Practice (e.g., IRB, documentation, research ethics)']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your thoughtful review and feedback! To respond to the points you raise:
1. **On studying open-source models.** We highlighted white-box studies involving open-source models as a future direction in the Conclusion because no safety-trained open-source large language models existed at the time of writing, so it would not have been possible to study them. (The first such model, Llama 2 from Meta [1], was released after the end of the NeurIPS review period.) And even with open model weights, there still exists a significant gap between proprietary and open-source datasets/infrastructure for safety training. We nonetheless agree that further investigation with white-box access is an exciting future direction.
2. **On novelty / existing attacks.** While there are many known jailbreak attacks, our work stands apart in tracing the root causes of these attacks back to the design of training processes. This has let us uncover new attacks, guided by the principles we identify, with our best attacks outperforming those in informal public discourse. Unlike the informal literature, which typically involve a small number of handpicked examples, we provide a quantitative, systematic study of these attacks. Finally, we discussed our results with affected model creators as part of our responsible disclosure process, and their feedback indicated that our results were not only novel, but also interesting and valuable to them.
3. **On safety training interventions.** As discussed in Section 5, our findings suggest that robust solutions may have to come from beyond the existing pretrain-then-finetune paradigm. In this vein, the approach of Perez et al. to incorporate human preferences *during pretraining* [2] is a promising example that plausibly addresses both failure modes of standard RLHF. We view the development and evaluation of such approaches as an exciting domain for future research.
Again, thank you for the review, and let us know if you have any further questions!
[1] https://arxiv.org/abs/2307.09288
[2] https://arxiv.org/abs/2302.08582
---
Rebuttal Comment 1.1:
Comment: The response addresses my concerns, thank you! | Summary: The paper studies jailbreaking of large language models. The authors identify two categories of causes of jailbreaks, competing objectives and mismatched generalization, and use this insight to analyze existing jailbreaks and construct new ones. They empirically study the effectiveness of these jailbreaks on state-of-the-art models that have been trained to refuse unsafe instructions, and draw several conclusions from their analysis.
Strengths: Jailbreaks pose an important safety problem for publicly-deployed AI systems. The conceptual analysis is novel and insightful, and the experiments are careful and thorough. The conclusion about safety-capability parity is especially interesting. Moreover, merely demonstrating extremely successful attacks directly helps companies deploying models to make their models safer via responsible disclosure. The paper is also very well-written.
Weaknesses: The main weakness of the paper, which is easily rectified, is that the authors do not clarify (unless I somehow missed it) which if any models were queried as part of the construction of the attacks, or more broadly if the attacks have any particular intended target model. It seems reasonable to query a model when constructing an attack since the threat model allows this, but the information should be provided as it is needed to properly interpret the results of the paper. I don't think anything long and detailed is necessary if the process was somewhat ad-hoc, but some indication is necessary I think.
As a case in point, the paper interprets the results on GPT-3.5 (Table 3) as "GPT-3.5 not having the capability to understand complex inputs" (p. 8 line 336). However, I suspect that a more likely explanation of why the top 4 attacks are all attacks from jailbreakchat.com is that they specifically targeted at GPT-3.5 (since that version of ChatGPT is free-to-use and hence likely much more popular).
I also disagreed slightly with some of the claims in the section "What Scaling Won't Solve" (p. 8 line 350). The authors argue that scaling will not solve the problem of competing objectives, since the objectives will still compete. However, larger models may be able to obtain Pareto improvements on both objectives. Hence even though the competition between the objectives may remain, the models may still perform as well as necessary on the safety objective. Nevertheless, I did agree with the argument that scaling alone will not solve the problem of mismatched generalization (not counting using the models to generate stronger attacks, which is discussed later).
Minor point: in the phrase "targeted training is insufficient" (p. 8 line 325), the term "targeted training" is not defined. I initially interpreted it to mean "adversarial training" (which the results do not demonstrate is necessarily insufficient), whereas I think the authors mean "adversarial training that only targets a subset of the possible failure modes" (which it is unsurprising is insufficient, though I suppose it might still be worth noting).
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: No questions other than those given above
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 3 good
Limitations: No limitations other than those given above
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the thoughtful review of our paper! Responding to the points you raise:
1. **On which models were queried.** Attacks were developed by querying GPT-4 and Claude on a subset of the 32 curated prompts, in line with the threat model described in Section 2.1. In addition, attacks from jailbreakchat.com were likely developed by querying GPT-3.5 and GPT-4, but not Claude, given the state of access at the time. The attacks for the 317 synthetic prompts were the same as those on the curated prompts, so they were not adaptive to the prompts (but they were implicitly adaptive to the model, as they transferred from testing on the curated prompts). Please let us know if this addresses your question–we will add this information to the paper in the final version.
2. **On GPT-3.5 attacks.** To clarify about GPT-3.5 on complex inputs, we observe that GPT-3.5 is unable to respond even to a harmless prompt for many of the more complex attacks—see Table 7 in Appendix D. (We will add this more specific pointer in the final version.) Nonetheless, we agree that the top jailbreakchat.com attacks were likely targeted so that GPT-3.5 could respond to them.
3. **On targeted training.** We used "targeted training" to refer to training aimed specifically at certain attacks. Specifically, it appears that Anthropic identified roleplay as a failure mode in their red teaming paper [1] and performed roleplay-specific training for Claude [2]. Our evaluation with Claude highlights that this approach did not address the underlying failure modes, and thus Claude remained vulnerable to other forms of attack.
[1] https://arxiv.org/abs/2209.07858
[2] See the documentation for “claude-v1.2” at https://web.archive.org/web/20230519130926/https://console.anthropic.com/docs/api/reference
---
Rebuttal Comment 1.1:
Title: Response to rebuttal
Comment: Thank you for your rebuttal. This addresses my questions about these 3 points. | Summary: The paper investigates two reasons why jailbreak attacks against SOTA LLMs (GPT-4, GPT-3.5 Turbo, Claude v1.3) succeed despite extensive safety training: a) competing training objectives (safety objectives vs. pretraining/instruction tuning) and b) failure of safety training to generalize to conditions covered by pretraining/instruction tuning but not safety training. These two modes are then used as guiding principles to design new jailbreak attacks that are empirically evaluated against the above three models, starting from both known and newly synthesized harmful prompts; a large number of these attacks are shown to have a high success rate. The authors provide concluding hypotheses regarding future work to defend against attacks (more sophisticated safety models that match the models' basic capabilities).
Strengths: The paper aims to examine jailbreak attacks in a more principled way and presents an empirical evaluation that is broader in scope than previous studies. Though the insights are not entirely surprising, the paper does a good job explaining and exemplifying the two failure modes, evaluating SOTA models along those axes, and providing quantitative results. Although the failure modes are intuitively clear and have been known informally by most developers of LLMs, it is valuable to see a concise formulation of the problems and a fairly thorough experimental study. The paper offers high-level suggestions for how to do red-teaming or safety training more effectively and as such would be valuable for the community.
The presentation is very clear; authors observe responsible disclosure practices and have discussed potential ethical considerations (recipes for creating jailbreak attacks).
Weaknesses: 1. The paper focuses on two failure modes only. Arguably these could be the most important ones, but the bigger question is whether one could come up with a more comprehensive taxonomy of failure modes. The paper doesn't discuss this in more detail.
2. Many of the observations and conclusions are by necessity tentative and unconfirmed, due to black-box only access to models. The authors could have verified their hypotheses more directly by using a smaller open-source model where more details about training conditions and data resources are available. This setup could also have been used to explore the space of failure modes more systematically (e.g., what role does decoding play?).
3. The most successful attacks were combinations, but those were not studied in detail from a conceptual point of view; this seems to merit an entirely separate discussion: since there's a combinatorial space of combined attacks, how could models be trained successfully to defend against these?
4. The 'safety-capability' parity point remains vague - what does this mean for the actual safety model, are there concrete mechanisms you could suggest to make models more 'sophisticated'?
It would be good to see more discussion of at least points #1 and #3 in the paper.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: It's not clear to me where the attacks from jailbreakchat.com (p. 7) land in Table 1 -- which ones does this map to?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations: The authors discuss the limitations (e.g., no access to models themselves, hence tentative conclusions) but in the light of these they could have opted to also include some targeted experiments with open-source models (relegated to future work in the paper).
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the thoughtful and comprehensive review! To respond to the weaknesses and questions that you bring up:
1. **On the focus on two failure modes.** We view simplicity here as a strength rather than a weakness, as we identify two core issues that underlie almost all of the diverse set of known attacks. This is akin to how OWASP maintains a simplified list of the top vulnerabilities in web security [1]. We agree with you that a broader taxonomy could be envisioned—as in traditional security settings, there can certainly exist a long tail of exceptions. (E.g., OpenAI models can also be attacked via the system prompt / OpenAI's Chat Markup Language.)
2. **On black-box only access.** We highlighted white-box studies involving open-source models as a future direction in the Conclusion because no safety-trained open-source large language models existed at the time of writing, so it was not possible for us to investigate them. (The first such model, Llama 2 from Meta [2], was released after the end of the NeurIPS review period.) And even with open model weights, there still exists a significant gap between proprietary and open-source datasets/infrastructure for safety training. We agree that further investigation with white-box access is an exciting future direction.
3. **On combinations of attacks.** We believe that training should address underlying failure modes rather than specific attacks. Addressing these fundamental failure modes would likely render combinations of attacks ineffective. As an example, the approach of Perez et al. [3] to incorporate human preferences during pretraining could plausibly address both failure modes.
4. **On safety-capability parity.** By safety-capability parity, we mean that models performing safeguarding should be comparable (or stronger) in capabilities to the model being safeguarded. For instance, we envision language models as a necessary component of the safety pipelines of future systems, which contrasts with more primitive techniques like word filters.
5. **On jailbreakchat.com attacks.** The attacks from jailbreakchat.com are italicized in Table 1. Specifically, they are labeled as AIM, evil_system_prompt, dev_mode_v2, dev_mode_with_rant, and evil_confidant.
We hope these clarifications address your concerns—we are also happy to engage in further discussions to enhance the paper. Thank you again for the insightful comments!
[1] https://owasp.org/www-project-top-ten/
[2] https://arxiv.org/abs/2307.09288
[3] https://arxiv.org/abs/2302.08582
---
Rebuttal Comment 1.1:
Comment: Thank you for your rebuttal. It addresses my comments. | null | null |
On Masked Pre-training and the Marginal Likelihood | Accept (poster) | Summary: The authors set out to show that the good generalization of Masked Pre-Training (MPT) can be explained by its equivalence to maximizing the model's marginal likelihood. The exchangeability assumption in sequential inputs is handled via a combinatorial choice over the subset of masked features, such that the masked pre-training objective is a marginalization over a uniform distribution of different maskings. The rest of the paper is a discussion of the implications of such a formulation.
Strengths: The paper is reasonably written, and the authors have made a decent effort to convey the significance of the work.
Weaknesses: I am not very convinced by the arguments in the paper that claim that maximizing the marginal likelihood is the reason behind generalization of MPT.
In Line 182, the authors claim an "insight" that MPT targets a biased estimate of the LML by showing that, when the masking rate is fixed to 20% of tokens, MPT becomes biased and converges to a different value. I feel like this argument could be made for any two objectives, and something remains missing in this justification. From the reverse-engineered theory, it is already obvious that MPT introduces bias, so positioning this practical result as an insight seems to add no new information. I'd appreciate a clarification here.
In Line 212, the authors claim that MPT performs implicit integration. However, this claim also seems a little unfounded. When averaging over the % of masked tokens in Figure 4, if the bulk of the integrated mass is constant, then an alternate way to approximate the integral would be to just take the constant mass (since the integral contains averaging terms as well). Doesn't this confound the conclusion, making the result just a matter of coincidence?
In any case, the marginal likelihood answers a question very different from generalization: it tells us the likelihood of the data under the "prior" model, i.e. how well the priors explain the data. In principle, it does not have anything to do with what happens to the model after training. There could be confounders to the "insights" claimed by the authors here, especially with large models, since the choices of the number of tokens and the number of maskings are far from what large language models use.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. Perhaps I am misunderstanding, but shouldn't Eq. (2) be conditioned on all feature variables $1:D$ except for the one being predicted? Eq. (3) seems to present it correctly, since it clearly distinguishes masking indices.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: Yes. See my comments on weaknesses and questions.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for the constructive feedback and the time taken to review our paper. We have addressed all your comments individually below. If any concerns remain, we would also be happy to clarify.
**Point 1** & **Point 2**
> *I am not very convinced by the arguments in the paper that claim that maximizing the marginal likelihood is the reason behind generalization of MPT.*
> *In any case, marginal likelihood just answers a question very different from generalization - (...)*
We would like to remark that this is not the exact message that we want to send in our paper. Precisely, we are interested in the link between MPT and the log-marginal likelihood. Thus, the main discovery that we show is that the log-marginal likelihood is equivalent to the cumulative sum of MPT losses computed with different sizes of the mask. The final conclusion of our work is that MPT implicitly performs stochastic maximisation of the model's marginal likelihood. That is what we prove in a rigorous manner. Additionally, these points are confirmed with the empirical results provided on different tractable and intractable models.
The fact that MPT implicitly maximises the model's marginal likelihood is important. Indeed, the marginal likelihood (or evidence) has long been considered as the measure of generalization ability in Bayesian models. As we are sure the reviewer knows, the LML is usually the desired loss that one would like to optimise in Bayesian modelling, with a long list of examples where Laplace approximations for Bayesian NNs have recently excelled. However, it is usually difficult to compute due to prohibitive computational cost or intractable integrals.
Hence, having this connection is perhaps not the final answer to why MPT works so well in recent advances, but we believe that the empirical evidence is a step forward towards explaining at least one of the causes of why pre-trained models generalize so well.
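To make this equivalence concrete, here is a minimal numerical sketch on a toy tractable distribution (an illustration added here, not code from the paper; the joint distribution, the number of variables, and the observed sequence are arbitrary choices). It checks that summing, over mask sizes, the average masked-prediction log-probability recovers the exact log-marginal likelihood:

```python
import itertools
import math
import random

random.seed(0)
D = 3  # number of "tokens", each binary

# Toy tractable joint distribution p(x1, x2, x3): random positive weights, normalized.
states = list(itertools.product([0, 1], repeat=D))
weights = [random.random() for _ in states]
Z = sum(weights)
joint = {s: w / Z for s, w in zip(states, weights)}

def marginal(assign):
    """Probability of a partial assignment {index: value}, marginalizing the rest."""
    return sum(p for s, p in joint.items() if all(s[i] == v for i, v in assign.items()))

x = (1, 0, 1)  # an observed "sequence"

# Cumulative MPT losses: for each mask size t, average the masked-prediction
# log-probability over all masks M of size t and all masked positions i in M.
lml_estimate = 0.0
for t in range(1, D + 1):
    masks = list(itertools.combinations(range(D), t))
    total = 0.0
    for M in masks:
        unmasked = {i: x[i] for i in range(D) if i not in M}
        for i in M:
            # log p(x_i | unmasked tokens), with the other masked tokens marginalized out
            total += math.log(marginal({**unmasked, i: x[i]}) / marginal(unmasked))
    lml_estimate += total / (len(masks) * t)

# The cumulative sum over mask sizes equals the exact log-marginal likelihood.
print(abs(lml_estimate - math.log(joint[x])))  # ~0 up to float precision
```

Enumerating all masks is only feasible for tiny $D$; in practice MPT samples masks stochastically, which is exactly the stochastic maximisation discussed above.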
**Point 3**
> *In Line 182, the authors claim an "insight" that MPT targets a biased estimate of LML by showing that fixing the masking rate to 20% of tokens, (...).*
Fair point. So far, before the section that begins in L175, we have proven and provided empirical results indicating that the cumulative MPT losses with masking rates from 1% to 100% equal the LML of the model. However, we know that this is not what MPT does in practice, where the masking rate is usually fixed.
We were particularly interested in understanding the effect of fixing the masking rate. Perhaps the main surprise is not that one obtains a biased estimate of the LML probabilities, but that the bias is **fixed** with respect to the learnable parameters. This is not a common thing, and biased estimation is usually a huge problem. However, in the results that we provide, this bias does not negatively affect the maximisation of the LML, as indicated in Figures 2, 3 and 5.
On the other hand, we were interested in understanding the meaning of setting a fixed masking rate of 20, 30, 40 or even 50%. For that reason, we included additional empirical results in Figure 4, where the $x$-axis is the % of masked tokens. Notice that the area under the curve is the LML, which is maximised as the MPT losses decrease.
**Point 4**
> *In Line 212, the authors claim that MPT performs implicit integration. However, this claim also seems a little unfounded. (...)*
We understand the concern here, but we do not think that the claim on *implicit integration* is *unfounded*. We remark that the **expectation integral** plays a key role in the main equation in Proposition 1. Moreover, as the reviewer correctly points out, we focus on the area under the curve described by the MPT losses with respect to the % of masking rate. These curves can be seen in Figures 4 and 5.
The area under the curve is not necessarily constant, and fixing the masking rate basically sets an approximation based on the constant mass. Depending on which decisions are taken, the integration of the area is performed with one approximation or another. Importantly, none of this is done on purpose, but *implicitly* when we average losses in MPT. We do not think that all of this is just a **coincidence**, as the theory comes first and the empirical results later. In that sense, we encourage the reviewer to check the left-hand plot in Figure 5, where the area under the curve converges to the true value of the model's LML.
**Point 5**
> *Perhaps I am misunderstanding, but shouldn't Eq. (2) be conditioned on all feature variables $1:D$ except for the one being predicted? (...)*
In particular, Equation (2) uses the rules of probability (i.e. the chain rule) to factorise $\log p_{\theta}(\boldsymbol{x})$. Notice that the object $\boldsymbol{x}$ includes all the observed variables $x_1, x_2, x_3, \dots, x_D$. In this way, the density $\log p_{\theta}(\boldsymbol{x})$ works as a joint distribution $\log p_{\theta}(x_1, x_2, \dots, x_D)$. If we apply the chain rule of conditional probability once to this joint distribution, we easily get the following factorisation
$$\log p_{\theta}(x_1, x_2, \dots, x_D) = \log p_{\theta}(x_1| x_2, \dots, x_D) +\log p_{\theta}(x_2, \dots, x_D).$$
Doing it again on the right-hand-side log-density $\log p_{\theta}(x_2, \dots, x_D)$ adds another conditional term to the sum, such that
$$\log p_{\theta}(x_1, x_2, \dots, x_D) = \log p_{\theta}(x_1| x_2, \dots, x_D) +\log p_{\theta}(x_2 | x_3, \dots, x_D) + \log p_{\theta}(x_3, \dots, x_D).$$
If we recursively apply this property, we obtain the sum expressed in Eq. (2), which does not require distributions conditioned on all the feature variables $1:D$. We hope that this clarifies this point, as it is a key property in the rest of the derivations used for the proof.
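This recursion is easy to check numerically. The following minimal sketch (an illustrative toy added here, with an arbitrary small joint distribution; not code from the paper) confirms that the summed conditional log-probabilities equal the joint log-probability:

```python
import itertools
import math
import random

random.seed(1)
D = 3  # three binary variables x1, x2, x3

# Arbitrary toy joint distribution, normalized to sum to one.
states = list(itertools.product([0, 1], repeat=D))
weights = [random.random() for _ in states]
Z = sum(weights)
joint = {s: w / Z for s, w in zip(states, weights)}

def marginal(assign):
    """Probability of a partial assignment {index: value}, marginalizing the rest."""
    return sum(p for s, p in joint.items() if all(s[i] == v for i, v in assign.items()))

x = (0, 1, 1)

# Chain rule: log p(x1, x2, x3)
#   = log p(x1 | x2, x3) + log p(x2 | x3) + log p(x3)
chain = 0.0
for t in range(D):
    num = marginal({i: x[i] for i in range(t, D)})      # p(x_t, ..., x_D)
    den = marginal({i: x[i] for i in range(t + 1, D)})  # p(x_{t+1}, ..., x_D)
    chain += math.log(num / den)

print(abs(chain - math.log(joint[x])))  # ~0 up to float precision
```

No distribution here is conditioned on all $D$ features at once; each term conditions only on the variables that remain after the recursion peels off the earlier ones.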
---
Rebuttal Comment 1.1:
Comment: > Indeed, the marginal likelihood (or evidence) has long been considered as the measure of generalization ability in Bayesian models.
I think this statement has much more nuance to it than how it is often directly inherited from early works of David MacKay. In many cases, it often comes out objectively wrong too, e.g. see [1].
> The fact that MPT implicitly maximises the model's marginal likelihood is important.
All details aside, I think this would be the key disagreement I have, and my main hesitation to lean towards an accept. The connection between MPT and optimizing LML is claimed to be "important". The paper, however, spends a lot of time making and supporting the connection, but not on why this connection is "important". And the fact that other supporting literature that relies on LML has not quite made enough of a mark makes me less confident about the importance of such a connection, if it is practically relevant at all.
Of course, I empathize with the point that this is only the first step, and there are serious computational challenges.
[1]: https://proceedings.mlr.press/v162/lotfi22a.html
---
Thank you for all the other clarifications as well. I'll keep my score to reflect my confidence in the connection's ability to influence practical applications, given that you have enough support from other reviewers. :-)
---
Reply to Comment 1.1.1:
Title: Response to additional comments
Comment: Dear reviewer, thanks for the time spent to read our rebuttal and the additional feedback provided. We would like to add some comments to your main points.
> *I think this statement has much more nuance to it than how it is often directly inherited from early works of David MacKay. In many cases, it comes out often objectively wrong too, e.g. see Lotfi et al. ICML 2022*
1. We appreciate the attached reference, which is one recent paper arguing, in the context of Bayesian NNs, that the LML might be misaligned with generalisation (in some cases). However, this paper stands against a whole thread of previous works providing empirical results in the opposite direction, e.g. all the works inherited from David MacKay, including foundational probabilistic approaches from Z. Ghahramani, up to the main principles behind the success of Gaussian processes.
We must say that the skepticism around the LML and its role in Bayesian NNs is a hot topic in the community right now, which mainly stems from the difficulties faced in recent years, for instance, the cold posterior effect, the choice of proper priors for weights, and prohibitive computational costs.
2. In general, we think that saying that the LML hasn't had practical value so far, thereby disregarding decades of work in the Bayesian community, is perhaps not a good idea.
Nonetheless, the reviewer is right in the sense that the LML has not yet made a practical difference in deep learning, but this does not mean much, as nobody currently knows how to even evaluate the LML with very large NNs. That is, the relevant experiments in that direction have unfortunately not yet been conducted.
> *And the fact that other supporting literature that relies on LML has not quite made enough of a mark, makes me less confident about the importance of such a connection, if at all it is practically relevant.*
3. Additionally, the argument that evaluating the LML is not practically relevant is a *chicken-and-egg* type of problem: the Bayesian hypothesis regarding the LML cannot yet be properly tested without tools to evaluate the LML, and tools for evaluating the LML cannot move forward because the LML may not be considered valuable.
> *All details aside, I think this would be the key disagreement I have, and the main hesitation to leaning towards an accept.*
> *I'll keep my score to reflect my confidence in the connection's ability to influence practical applications, given that you have enough support from other reviewers. :-)*
We understand the skepticism around the previous points on the LML, but in our opinion, such a debate is perhaps out of the scope of our work and in a different direction. We remark that we were interested in building a link between the LML and masked pre-training. Thus, we hoped it to be a first step towards understanding, and a first door opened for all the Bayesian literature to contribute to the immensely large effort needed for the new MPT techniques and novel models such as LLMs. But not to have all the new advances be led by the LML or the Bayesian paradigm; that's a different matter.
Finally, we hope that you could reconsider your hesitation to lean towards acceptance, and in such a case, if some concerns remain, we are of course happy to answer. :)
[1] S. Lotfi, P. Izmailov, G. Benton, M. Goldblum and A. G. Wilson, Bayesian Model Selection, the Marginal Likelihood, and Generalization, ICML 2022 | Summary: This paper derives the equivalence between the log marginal likelihood and the negative masked pre-training loss often used for training text models. The paper argues that, since the marginal likelihood is the Bayesian way to do model selection, training with the masked pre-training loss inherits good generalization properties.
Experiments are done on a probabilistic PCA model, a VAE, and BERT. In PPCA, the ground truth is known and the equivalence is confirmed empirically. On this model, it is also empirically confirmed that even when we optimize a biased estimate of the log marginal likelihood, different training runs still converge to the same log marginal likelihood, with a slight offset from the true value.
In VAE, we don't know the ground truth, but verify that optimizing using the masked pretraining loss is similar to optimizing with ELBO, in terms of the log marginal likelihood value that the model converges to.
In the BERT model, we simply observe that the shape of the S(x, M) term is similar for PPCA and BERT during different points of training.
Strengths: - This is an important connection to make and the authors discuss relevance for LLMs and Bayesian learning in the discussion.
- The experiments study the realistic case in which we can't enumerate over all masks and verify that masked pre-training is still similar to log marginal likelihood training
Weaknesses: - The derivations are likely correct but I couldn't establish this 100%, see questions below.
- Isn't the Transformer just trained with the autoregressive log likelihood (and not with masked pre-training)? And isn't ViT simply trained to predict object class -- where is the masking?
- The jump to Proposition 1 was big for me and the preceding intuition building parts didn't help
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: - In equation (1), do we not condition on $\mathbf x_{\mathcal M \setminus \{\mathcal M(t)\}}$ -- how is the marginalization over this variable done in language models?
- Is there a reason why we condition on $\mathbf x_{t + 1:D}$ instead of $\mathbf x_{1:t}$?
- In the equation before equation 4, we assume $M < D$, so the value $\mathbf x_D$, which is part of $\mathbf x_{\mathcal M(t+1:D)}$ on the left-hand side, doesn't exist
- In line 109, shouldn't it be ${D \choose t}$ instead of ${D \choose t - 1}$ since there are $t$ tokens we condition on?
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: -
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the positive consideration of our work and the useful feedback, including the technical questions on the derivations. We address below all the questions brought up in the review; if anything remains unclear, we would be happy to provide further responses.
**Point 1**
> *Isn't the Transformer just trained with the autoregressive log likelihood (and not with masked pre-training)? And isn't ViT simply trained to predict object class -- where is the masking?*
Fair point. As the reviewer indicates, the Transformer and ViT architectures themselves don't use masked pre-training (MPT), and we do not mention them as the models considered in our analysis of MPT, nor in the related work section. However, BERT and MAE models do use Transformers and ViTs as their respective backbones while, at the same time, using MPT to generate the self-supervisory signals. Precisely, this pre-training methodology is the point where we focus our theoretical analysis and the connection with the LML.
Regarding ViT and vision models, we have now included new empirical results with the ViT-MAE model. These are shown in the PDF attached in the global response to all reviewers.
**Point 2**
> *In equation (1), do we not condition on $x_{\mathcal{M}/\mathcal{M}(t)}$ -- how is the marginalisation over this variable done in language models?*
Assuming you refer to the masked tokens $x_{\mathcal{M}(t)}$, we do not condition on them as they are the ones we want to predict. These are indeed the self-supervisory signals that make the model learn a holistic structure of the data. Notice that we are recurrently conditioning on the rest of the tokens $x_{\mathcal{R}}$. Importantly, the observed data is never marginalised, and the log-marginal likelihood (LML) builds a probability metric over all the tokens.
**Point 3**
> *Is there a reason why we condition on $x_{t+1:D}$ instead of $x_{1:t}$?*
If we understand correctly, the reviewer is referring to Equation (2), where we apply properties of conditional probabilities to factorise the $\log p_{\theta}(x)$ distribution, or LML. We highlight that the object $x$ contains $D$ variables $x_1,x_2,\dots, x_D$. Thus, the goal of the sum is to obtain a recursive summation of uni-variate conditional distributions, for instance, $\log p_{\theta}(x_1|x_2,x_3, \dots ,x_D) + \log p_{\theta}(x_2|x_3, \dots ,x_D) + \dots$ That's why we use $x_{t+1:D}$ rather than $x_{1:t}$ in the conditioning variables: it is simply a choice of the order in which the chain rule peels off the variables.
**Point 4**
> *In equation before equation 4, we assume $M < D$, so the value $x_D$ which is part of $x_{\mathcal{M}(t+1:D)}$ in the left hand side doesn't exist*
Right, maybe this was not clear enough. The equation indicated by the reviewer says that we have multiple choices for the order of the conditioning objects on the right-hand side of the log probabilities. Importantly, the letter $\mathcal{M}$ indicates the indices, such that $\mathcal{M} = \{1,2,\dots,D\}$, as described in L94.
The key point in this equation is that we fix the index $t$, where $1<t<D$, so that we can analyse all the mentioned combinations for $x^{(\pi)}_{\mathcal{M}(t+1:D)}$. Thus, the tokens included in the list of indices $\mathcal{M}(t+1:D)$ depend on the choice $\pi$, and there is not really a problem with the variable $x_D$.
We highlight that the variable $M$ (size of mask) is different from the variable $\mathcal{M}$ (indices of tokens), which could be the cause of a confusion here. We hope that this clarifies this point and we will do our best to update the section on this regard.
**Point 5**
> *In line 109, shouldn't it be $\binom{D}{t}$ instead of $\binom{D}{t-1}$ since there are tokens we condition on?*
Fair point. This critical point is perhaps a bit counterintuitive. We will update it in a way that is clearer. The key idea in L109 is that, fixing an index $t$, we have $D-t+1$ tokens to predict probabilities on (left hand side variables $x_{\mathcal{M}(t)}$ in the conditional distribution), and D-choose-(t-1) distinct orders for the conditioning tokens $x^{(\pi)}_{\mathcal{M}(t+1:D)}$ (*unmasked*). That is, we represent the latter choices with the binomial coefficient $\mathcal{C_t} = \binom{C}{t-1}$. Extra details on these derivations are also included in the Appendix A of the paper, where all elements are placed together for a better understanding. | Summary: This paper shows that MLM is effectively maximizing the model's marginal likelihood, perhaps explaining why MLM has been successful. Beyond providing a proof, they run several empirical experiments suggesting their results hold in practice.
Strengths: 1. Clear introduction; Clear highlighting of paper strengths.
2. Clear background on MLM.
3. They provide a proof in the appendix and a number of empirical tests that support their findings in practice.
4. Their discussion showing connections both to model design in AI as well as relevance to improving Bayesian Models seems well put together.
I don't deeply understand the proof and math, so I am unable to fully judge the strength of the contribution. For what I do appreciate, I would recommend the paper be accepted. I found the paper to be interesting, overall well-presented, and noteworthy.
Weaknesses: 1. Providing more hand-holding between the various subsections of section 3 would help the reader understand your goals for each section. Right now I can see the specific finding, but: Stressing the greater point that you're trying to verify your proof in practice (?) in each of these subsections and where and how you find support would be great.
2. Providing slightly more hand-holding on what marginal likelihood is and what it would be capturing in the context of, say, BERT, would make the paper stronger.
Nits
1. Figure 1: The y-axis values get smaller (a key detail!) but I could not see this initially; the text was very light. Perhaps increase the font size or make more of a note of it in the caption.
2. Figure 5: y-axis is both small and cut off.
Technical Quality: 4 excellent
Clarity: 3 good
Questions for Authors: On line 207 (and beyond) you make some distinction between linear vs non-linear models. Does your proof make assumptions along these lines? If so, can you bring these assumptions further up into the abstract?
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 3 good
Contribution: 3 good
Limitations: "The proofs can either appear in the main paper or the supplemental material, but if they appear in the supplemental material, authors are encouraged to provide a short proof sketch to provide intuition." More of a proof sketch might help the reader understand the footing of the paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the positive consideration and the useful feedback on our work. We also appreciate the highlighted strengths and contributions. We have addressed the main comments and concerns included in your review. If any other concern remains, we would also be happy to clarify during the discussion period.
**Point 1 & 2**
> *Providing more hand-holding between the various subsections of section 3 would help (...)*
> *Providing slightly more hand-holding on what marginal likelihood is and what it would be capturing in the context of, say, BERT, would make the paper stronger.*
We appreciate the insightful feedback for making the paper *clearer* and *stronger*. From the beginning, our main concern was the structure of the paper. Mainly, we had to choose between the two views described at the end of **Section 5**: writing the paper to highlight the relevance for Bayesian ML practitioners (who might not be aware of the role that MPT plays), or for an audience who knows MPT very well and would like to use the proposed link for a better understanding. Perhaps we implicitly chose the first view, whose main consequence is the two points mentioned by the reviewer. We will update the paper according to your comments if it goes on to a final version.
**Point 3**
> *On line 207 (and beyond) you make some distinction between linear vs non-linear models. Does your proof make assumptions along these lines? (...)*
Fair point. The paper is organised so that we first focus on the connection between masked pre-training (MPT) and log-marginal likelihood (LML), independently of the type of probabilistic model chosen (e.g. linear, as in PPCA, or non-linear, as in VAEs). Importantly, the proof we provide **does not make any assumption** about the linear or non-linear nature of the probabilistic model. This is an important point that we would like to emphasize.
Once we provide the main theoretical results for the connection between MPT and LML, we move to the second part, where our interest is in providing new empirical support for the theory. Since for linear models the computation of the LML is generally **exact**, we used PPCA as the first probabilistic model for verification. This is inspired by Lucas et al., NeurIPS 2019, who also used PPCA for analysing the behaviour of VAEs and, particularly, the *posterior collapse* effect.
The sentence on L207 makes reference to the applicability of the theory (i.e. connection between MPT and LML) beyond tractable linear models, that is, non-linear models. We address this point in the following sections with empirical results with VAEs and LLMs (i.e. BERT).
```
J. Lucas, G. Tucker, R. Grosse and M. Norouzi. *Don’t Blame the ELBO! A Linear VAE Perspective on Posterior Collapse*. NeurIPS 2019
```
**Point 4**
> *"The proofs can either appear in the main paper or the supplemental material, but if they appear in the supplemental material, authors are encouraged to provide a short proof sketch to provide intuition." More of a proof sketch might help the reader understand the footing of the paper.*
We appreciate your recommendation. Our first idea was to include the main proof in the paper; however, we chose an intermediate solution, which can be checked between L103-L130 and includes **Proposition 1**. We shaped this as a proof sketch, but considering your feedback, we would be happy to restructure this part and include extra details if the paper moves forward.
---
Rebuttal Comment 1.1:
Comment: Thanks for the helpful response!
I still recommend an accept but with low confidence because I feel I don't quite understand some of the proof nor some of the issues the other reviewers are pointing out.
---
Reply to Comment 1.1.1:
Comment: Thank you for the support. If there's anything you would like to discuss or clarify then we are more than happy to discuss further. | Summary: They show empirically that randomly masking a fixed number of M tokens and predicting with the remaining tokens produces a biased estimate of the log marginal likelihood $\log p(x)$ (LML).
Furthermore, they prove that repeating the masking for M from 1 to D (the total number of tokens), and summing up the estimates, leads to an unbiased estimate of the LML.
Strengths: - The paper is well written and mostly easy to follow.
- Intuitive proofs and arguments are well supported by empirical results for tractable and intractable models.
- The work might give insights/ideas for setting masking rates in pre-training.
Weaknesses: Main Concern:
The main result, Proposition 1, requires conditioning on different subsets of the remaining tokens $R$. But this is not what is done in practice for masked pre-training, where normally all tokens in $R$ are used. It would have been interesting to see some discussion/experiments about the effect.
Others:
- It would have been interesting to see some experiments about the ideas stated in line 295.
- Given works like Fong and Holmes (2020), there is not so much novelty from a theoretical point of view.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: - Could you comment on my main concern above?
- Why does the LML change with the number of epochs?
- What is GT-LML?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: see main concern.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for acknowledging the main contributions of our work, as well as for the useful comments and the relevant feedback provided on the technical side. We have addressed _all_ your comments individually in the lines below. If something is still unclear, we would be happy to clarify in further responses.
**Point 1**
> *The main result Proposition 1, requires to condition on different subsets of the remaining tokens $R$. But this is not what is done in practice for masked pre-training, where normally all tokens in $R$ are used. It would have been interesting to see some discussion/experiments about the effect.*
We understand that, at first sight, the main result provided on the l.h.s. of **Proposition 1** seems to indicate that we are conditioning on different subsets of remaining tokens $\boldsymbol{x}_{\mathcal{R}}$. As the reviewer indicates, this would not fit with MPT, which in practice uses all remaining tokens $\mathcal{R}$.
However, the last expression in **Proposition 1** indicates something slightly different. In particular, the variable $x_{\mathcal{R}(1:D-j)}$ does not indicate a selection of subsets, but rather the number of remaining tokens. This is because the sum in this proof considers different ratios between the masking size $M$ and the number of remaining tokens $R$ (see the sum, which runs from $j=1$ to $M$). In the end, this corresponds to averaging over the number of masked tokens $M$, where $R$ is always set to the remaining tokens accordingly.
Perhaps this point can be more easily observed in the expression on the r.h.s. of the previous equation, where we condition on all the remaining tokens in $\boldsymbol{x}_{\mathcal{R}}$. This expression is also the one that matches Equation (1), building the final link with MPT. We apologise for this point, which might lead to some confusion. We would be happy to update this detail, while keeping the notation uncluttered, if the paper moves forward.
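To make the averaging argument concrete, the following self-contained sketch (a hypothetical toy joint over three binary tokens, not the paper's model) verifies numerically that every permutation of the chain-rule factorisation, and hence the average over all of them that underlies **Proposition 1**, recovers $\log p(x)$ exactly:

```python
import itertools
import math
import numpy as np

D = 3
rng = np.random.default_rng(0)
p = rng.random((2,) * D)
p /= p.sum()                      # hypothetical joint over D binary tokens
x = (1, 0, 1)                     # one observation

def log_cond(i, given):
    """log p(x_i | x_j for j in given), by marginalising the joint."""
    keep = sorted(set(given) | {i})
    num = p.sum(axis=tuple(a for a in range(D) if a not in keep))
    val = math.log(num[tuple(x[a] for a in keep)])
    if given:
        g = sorted(given)
        den = p.sum(axis=tuple(a for a in range(D) if a not in g))
        val -= math.log(den[tuple(x[a] for a in g)])
    return val

lml = math.log(p[x])              # exact log-marginal likelihood of x
# Every permutation's chain rule gives the same value, so their average does
# too; grouping the terms by position t yields the leave-M-out structure.
chain_vals = [
    sum(log_cond(pi[t], pi[:t]) for t in range(D))
    for pi in itertools.permutations(range(D))
]
assert all(abs(v - lml) < 1e-9 for v in chain_vals)
```

Note that the check holds for any joint distribution, which mirrors the model-agnostic nature of the identity.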
**Point 2**
> *It would have been interesting to see some experiments about the ideas stated in line 295.*
Thanks for pointing this out. We have now included new experiments that perform what is described in L295. In particular, we uniformly sample the number of masked tokens to obtain an unbiased estimate of the LML with the MPT losses. This produces what the theory indicates: the MPT losses are now centered with respect to the LML and also make it converge to the true LML. Additionally, we provide two extra figures where we sample uniformly in the 0--50% and 50--100% ranges. As these do not cover the entire range of mask sizes, they again yield biased estimates, as one would expect.
An important detail is that we initially discarded uniform sampling because it might become computationally prohibitive. This could happen if the drawn masking rate is, for instance, around 0%-5%, since the model then needs to recursively use almost all tokens at each iteration. Of course, this depends on the type of model chosen, but we mention it to clarify all details for the reviewer.
The figures can be accessed in the additional PDF page attached to the general rebuttal response (to all reviewers).
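For reference, the uniform sampling of the number of masked tokens can be sketched as follows (a minimal sketch; `sample_mask` is a hypothetical helper, not code from the paper):

```python
import numpy as np

def sample_mask(D, rng):
    """Draw the mask size M uniformly from {1, ..., D}, then pick M
    token positions without replacement; returns a boolean mask."""
    M = int(rng.integers(1, D + 1))
    idx = rng.choice(D, size=M, replace=False)
    mask = np.zeros(D, dtype=bool)
    mask[idx] = True
    return mask

# At each training step, the MPT loss would then be computed on the masked
# positions only, i.e. predicting x[mask] from the context x[~mask].
rng = np.random.default_rng(0)
sizes = {int(sample_mask(10, rng).sum()) for _ in range(2000)}
assert sizes == set(range(1, 11))   # every mask size occurs
```

Sampling the mask size (rather than fixing the rate) is what removes the bias, since all terms of the sum over $M$ in the identity are visited.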
**Point 3**
> *Given the works like Fong and Holmes (2020), there is no so much novelty from a theoretical point of view.*
Perhaps we have not correctly highlighted some of the main theoretical differences with respect to the work of Fong and Holmes (2020) in our paper. Thus, to address the reviewer's concern, we remark on some important details that might help explain why the theoretical ideas brought in by Fong and Holmes (2020) are significantly different from the ones shown in our submission.
The work of *Fong and Holmes (2020)* introduced a first connection between leave-p-out *cross-validation* (CV) and the LML in such cases where CV uses posterior predictive probabilities as the scoring function. The equivalence is theoretically analysed from a Bayesian statistical perspective, where the main focus is placed on exhaustive leave-p-out CV and the prohibitive computational cost.
Honestly, *Fong and Holmes (2020)* is a relevant reference because it inspired the probabilistic ML community to look further at how the LML is linked to many probabilistic settings. However, they did so only for cross-validation. In contrast, we have found, proved and provided empirical results on the equivalence between MPT and LML for different models. This work was initially inspired by recent work such as Chen et al., ICLR 2023, where Gaussian processes (GPs) are introduced into the architecture of transformers.
```
W. Chen and Y. Li, Calibrating Transformers via Sparse Gaussian Processes. ICLR 2023
```
**Point 4**
> *Why does the LML change with the number of epochs?*
If we guess correctly, the reviewer refers to the maximisation of the LML in the experiments and, particularly, to the curves drawn in Figure 2. This plot shows three curves: the LML of the true model which generated the data, the cumulative MPT loss under analysis, and the LML. The latter corresponds to the log-probability in Equation (6) for the PPCA model. In this experiment, we also maximise this expression of the LML to understand how it behaves compared with the MPT loss. Thus, the LML changes with the number of epochs because we are optimising w.r.t. the parameters $\boldsymbol{W}$ in L138 and L141. Interestingly, notice that both the LML and the MPT losses converge to the true value of the LML in the left-hand plot.
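For completeness, the tractable quantity this curve tracks can be computed as below (a sketch assuming the standard PPCA marginal $\log \mathcal{N}(x; 0, WW^\top + \sigma^2 I)$; the exact notation of the paper's Eq. (6) may differ):

```python
import numpy as np

def ppca_lml(x, W, sigma2):
    """log N(x; 0, W W^T + sigma^2 I): the exact PPCA log-marginal."""
    D = x.shape[0]
    C = W @ W.T + sigma2 * np.eye(D)     # marginal covariance
    _, logdet = np.linalg.slogdet(C)
    quad = x @ np.linalg.solve(C, x)
    return -0.5 * (D * np.log(2 * np.pi) + logdet + quad)

# Sanity check: with W = 0 the model reduces to isotropic Gaussian noise.
x = np.array([0.5, -1.0, 2.0])
W0 = np.zeros((3, 2))
expected = -0.5 * (3 * np.log(2 * np.pi * 0.1) + (x @ x) / 0.1)
assert np.isclose(ppca_lml(x, W0, 0.1), expected)
```

Because this expression is exact, optimising $W$ by gradient ascent on it directly traces the "LML" curve of Figure 2, against which the MPT losses are compared.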
**Point 5**
> *What is GT-LML?*
Good catch. GT-LML makes reference to *ground truth LML* in the caption of Figure 2, but we used the sentence "true model LML" in the label of the plots for the dashed line. It is a typo that will be corrected in the main paper.
---
Rebuttal 2:
Comment: Thank you very much for the clarifications. I have three final comments:
- I do not think that Proposition (1) is a main contribution, since it can be directly derived from Proposition (2) in Fong and Holmes (2020).
- The same discussion as in "3.2. Sensitivity to the prior and preparatory training" of Fong and Holmes (2020) also applies here:
It might actually not(!) be beneficial to try to sample over all possible mask sizes, since
this would include for example the case where R is empty or contains only one token, which means we basically ignore any context information.
- By the way, I think Equation (1) is wrong:
Equation (1) assumes that the masked tokens are independent conditional on rest R, but this assumption is obviously not valid.
(In contrast, in Fong and Holmes (2020), when replacing x with iid samples y, this is valid.)
---
Rebuttal Comment 2.1:
Title: Response to additional comments from Reviewer mVT4
Comment: We thank the reviewer for the response to our clarifications. We would be happy to remark some important points around the final concerns indicated.
>*I do not think that Proposition (1) is a main contribution, since it can be directly derived from Proposition (2) in Fong and Holmes (2020).*
In paragraphs L27-32 and L33-37 of our work, we clearly state how both propositions and proofs are connected. Initially, we mention that our derivations rely on a previous result from Fong and Holmes (2020), and we even changed the notation from $R$ to $M$ to help the reader see that cross-validation is, in this case, close to masked pre-training. The text in the paper literally says:
*"Our proof relies on a previous observation from Fong and Holmes (2020), who shows that the log-marginal likelihood equals the average of exhaustive leave-M-out cross-validation (CV) given posterior predictive scores."*
After that sentence, in the same paragraph, we additionally explain that our formal results can be seen as the transposed version of the Fong and Holmes (2020) results: where cross-validation removes *random observations*, masked pre-training removes *random features*. In that light, we would ask the reviewer to reconsider the statement *"it can be directly derived from Proposition (2) in Fong and Holmes (2020)"*. How is it then possible to derive one result from the other, particularly when Fong and Holmes (2020) consider cross-validation and the prediction of observations, while we consider masked pre-training and the prediction of random features of observations?
In this way, we think that we have been completely transparent and trustworthy on the main contributions. Moreover, we have done our best to explain to the reader what the proof relies on and the main differences.
> *The same discussion as in "3.2. Sensitivity to the prior and preparatory training" of Fong and Holmes (2020) also applies here (...)*
Again, we understand the concerns around the connection with *Fong and Holmes (2020)*. We want to emphasize that we are big fans of the work of *Fong and Holmes (2020)*, but note that the link to the present paper is in the form of a *proof technique*. They do not touch upon MPT at any point, so a similar line of reasoning in the experiments and analysis does not make them "the same".
Moreover, as in the previous point, we think that statements of this sort are not well founded given what is presented in the paper. The interest of *Fong and Holmes (2020)* is very different there, as they discuss the prior and the number of data points (objects, not features) to be considered in cross-validation, and not the number of masked tokens as we do. In our case, we wanted to understand what happens when one fixes the masking rate, which is a very different perspective.
> *By the way, I think Equation (1) is wrong (...) In contrast, in Fong and Holmes (2020), when replacing x with iid samples y, this is valid.)*
We remark that Equation (1) belongs to Section **2 Masked pre-training**, where we give a short background for the reader. Here, nothing from *Fong and Holmes (2020)* applies yet. Thus, Equation (1) is not wrong, as we clearly explain what is being maximised by MPT. In this case, if the masked tokens $x_{\mathcal{M}}$ are modelled as conditionally independent given the rest $x_{\mathcal{R}}$, it makes sense that the log of the conditional distribution factorises into a sum. It is important to understand here that the equation shows how the variables are modelled when building the objective. If the reviewer considers that placing a $\approx$ symbol instead of the equality would be clearer, we would be happy to update it. One last detail for reference is that this sort of notation was previously used in the context of MPT, particularly in Song et al., NeurIPS 2020 @ Equation (1) and Yang et al., NeurIPS 2019 @ Eq. (3).
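As a toy numerical illustration of this modelling choice (using a hypothetical correlated joint, not the paper's model): for dependent tokens, the factorised sum in Equation (1) generally differs from the exact joint conditional, which is precisely why it is a modelling assumption rather than an identity.

```python
import numpy as np

rng = np.random.default_rng(1)
p = rng.random((2, 2, 2))
p /= p.sum()                      # hypothetical joint over 3 binary tokens
x = (1, 1, 0)                     # mask tokens 0 and 1, keep token 2

def logp(axes, vals):
    """log of the marginal of the joint over `axes` at `vals` (axes sorted)."""
    m = p.sum(axis=tuple(a for a in range(3) if a not in axes))
    return float(np.log(m[tuple(vals)]))

log_ctx = logp([2], [x[2]])
# Exact joint conditional: log p(x_0, x_1 | x_2)
exact = logp([0, 1, 2], list(x)) - log_ctx
# Factorised MPT-style objective: log p(x_0 | x_2) + log p(x_1 | x_2)
factorised = ((logp([0, 2], [x[0], x[2]]) - log_ctx)
              + (logp([1, 2], [x[1], x[2]]) - log_ctx))
# Both are valid log-probabilities; they coincide only under conditional
# independence of the masked tokens given the context.
assert exact <= 1e-12 and factorised <= 1e-12
```

This gap is also why conditioning between the tokens to be predicted (as in permuted or autoregressive variants) can tighten the objective.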
```
Z. Yang , Z. Dai, Y. Yang , J. Carbonell, R. Salakhutdinov, Q. V. Le, XLNet: Generalized Autoregressive Pretraining for Language Understanding, NeurIPS 2019
```
```
K. Song , X. Tan , T. Qin , J. Lu and T. Y. Liu, MPNet: Masked and Permuted Pre-training for Language Understanding, NeurIPS 2020
``` | Rebuttal 1:
Rebuttal: **General comment to all reviewers**
We thank all reviewers for their useful comments, positive consideration and relevant feedback on our paper. It seems that the reviews are positive in general and acknowledges our main contributions and soundness of our work. We have addressed each comment and question individually below and we would be glad to engage in discussion in case of more questions or concerns exist. We also provided an extra PDF with the new results from the experiments that some of the reviewers asked for. We will also update the main paper in the future with the main requested changes for improvement, if the submissions moves forward.
Moreover, as some reviewers already did in their comments, we use the acronyms MPT (masked pre-training) and LML (log-marginal likelihood) in our response.
**Note on the additional experiments**
In addition to the main response on this platform, we provide the results of the suggested experiments in the attached PDF. In particular, the plots included in Fig. 1 and the response **E.1** address **Point 2** of reviewer **mVT4**. Additionally, the results shown in Fig. 2 and section **E.2** correspond to **Point 2** of reviewer **EYbT**.
**E.1. Experiments on uniform masking rate sampling**
**E.1. Experiments on uniform masking rate sampling**
These experiments answer the question about the effect of uniformly sampling the masking ratio with MPT losses. So far, we have observed that the cumulative MPT loss is equivalent to an *unbiased* estimate of the log-marginal likelihood (LML) when we consider all possible values for the number of masked tokens. To avoid a *biased* estimate when fixing the masking rate (e.g. to 20%), one option is to use a uniform distribution. Thus, we sampled the masking rate at each epoch in the range $0\%-100\%$. The results in plot (A) of Fig. 1 indicate that we are able to obtain such an unbiased target loss. Importantly, notice that the cumulative losses oscillate around the true value of the LML, which is also maximised. For completeness, we also included the empirical results when the masking rate is sampled in different ranges (e.g. $50\%-100\%$). In these cases, we obtain different biases, shown in plots (B) and (C) of Fig. 1. These biases are related to the area under the curves described in Fig. 4 of the main paper.
**E.2. Experiments on vision.**
**E.2. Experiments on vision.**
In addition to the results included in Fig. 5 of the main paper with the BERT model, we now provide the curves produced in our experiments with the ViT-MAE model. For this study with vision self-supervised learners, we loaded a pre-trained ViT-MAE model from a public repository. To draw the curves in Fig. 2, we computed the losses for each masking rate between 5% and 95% on samples from three different test image datasets. We observe that the curves described by the pre-trained ViT-MAE model show a behaviour similar to the one we obtained with BERT. These curves are also aligned with the *canonical* analysis performed with the tractable models. On the other hand, the curves described by a ViT-MAE with randomly initialised parameters are *flat* and of low self-predictive probability.
Pdf: /pdf/d17ffa68002cbf971cd16a027f87de36576de8f1.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: This paper proposes viewing masked pretraining methods (like BERT or MAE) as optimizing a biased form of the log marginal likelihood $\log p_\theta(x)$. Using a parallel to exhaustive leave-M-out cross validation, the authors derive the exact relationship between the LML and a weighted sum of masked pretraining losses across mask sizes. The authors show that maximizing the masked pretraining loss improves the LML, across a wide variety of settings, from probabilistic PCA to VAEs to BERT.
Strengths: - I found Prop. 1, the connection between LML and the weighted sum of masked pretraining losses, to be original and quite enlightening.
- Paper is thorough and provides extensive empirical experiments to confirm the theory.
- Finding seems like a good step towards understanding masked pretraining objectives, which are extremely widespread in both NLP (BERT) and vision (MAE), and potentially understanding how to improve them.
- Paper is well written and clear, especially if the appendix is used to supplement the derivations in Section 3.
Weaknesses: - The biggest issue is that the paper shows that masked pretraining performance improves LML, but does not address why it does better than other training objectives that directly maximize LML (such as an autoregressive model).
- No experiments on the vision side, even though MAE is an extremely popular method that falls neatly into the framework.
- Some parts of the paper are a tiny bit confusing. Surprisingly, I felt that Appendix A and B could be swapped into the main paper and it would significantly improve the clarity of Proposition 1's derivation or the PPCA setting.
- I felt that the second paragraph of the intro (L19-26) didn't sufficiently motivate why the LML is important.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: **Major questions**:
- If maximizing LML is the ultimate reason why MPT is good, then why are other methods of directly maximizing LML (e.g., autoregressive models) worse?
- Can you run the same BERT experiments with pretrained MAE models as well, to show applicability to vision models?
**Minor questions and suggestions**:
- Where did this "log-marginal likelihood" name come from? It looks just like the $\log p_\theta(x)$ that all generative models maximize, so I'm confused about the naming.
- L19-26: This paragraph doesn't really convince me that the marginal likelihood is that important.
- L27-28: "masked pre-training optimizes according to a stochastic gradient of the log-marginal likelihood" -- I think this sentence should be removed, since the following sentence states the contribution more clearly.
- L33: "who shows" -> "who show"
- L74: "which might also include likelihood" -- what is likelihood?
- L86: "previous sum" -- is this referring to Eq. 2?
- L87-92: I understand what it's trying to say after reading the entire paper, but this paragraph is very confusing. What do exchangeable and invariant mean?
- The block of text between L108-109: "to be aligned" -- this is also confusing. It's explained much better in the appendix!
- Generally, Appendix A explains this proof much better than the main text.
- L128: "we usually have a biased estimator" -- wouldn't a Monte Carlo estimator, with masks drawn from the right distribution, be unbiased?
- L157: "just one random mask per training epoch" -- my understanding is that masks are different for each sample in each epoch. I'm reading this as using the same random mask for all samples in a given epoch.
- L165: "notice that P=1 is usually set up in MPT in practice" -- same issue as above. What exactly is being claimed about the number of masks?
- L170: "this bias is known" -- known in what sense? The numerical value of the bias, the symbolic form of the bias, or just the fact that the MPT objective is not equal to the LML objective?
- L183: "the exact LML is iteratively maximized at each epoch" -- I wouldn't use the word "maximize" here, as it seems like it increases but is not guaranteed to even reach a local minimum unless there are further assumptions on the structure of the model.
- Same issue with L204-205
- Fig. 2 caption: "The rate of masking is unfixed and it varies from 1% to 100%" -- I may be misunderstanding here, but with 10 tokens, shouldn't the masking rate start at 10%?
- L194-195: probabilities imply integrating out the latent variables, often given the posterior distribution. In most cases, performing this integration is extremely difficult or not possible in training time." -- why do there need to be latent variables at all? Latent variables are one way to induce a structured joint distribution over all dimensions of x, but this doesn't have to be used. In fact, latent variables aren't used in the proof of Prop 1.
- L202: "as well as we compare" -> "as well as compare"
- L209-210: "we used iterative encoding-decoding to produce the self-conditional probabilities for masked tokens — see Sec. F in Rezende et al. (2014)" -- it would be good to explain this procedure here. I glanced at Sec F and it seems like it only gives a way to sample from the marginal over masked tokens, not a way to compute the probability? I may be misunderstanding, which is why it would be nice to have a clear explanation in this paragraph.
- L212-214 on implicit integration: these lines are confusing and would benefit from being rewritten.
- Fig. 3 caption: "Darker lines indicate the evolution of the LML" -- the figure legend does not match this. Also, i don't understand why the lower right plot is separate from the lower left plot, or why the value is so different.
- Fig. 4: "looses" -> "loses." Also, I don't understand what the black lines in Fig. 4 (right) are supposed to denote.
- L218-219: "This explains why the probabilities have an approximately exponential decay" -- I don't see how anything explains this.
- L224: "effect to" -> "affect"
- L225: "a longer discussion is provided in the supplementary material" -- Appendix C.2 doesn't seem very relevant to this section.
- Table 2 caption: "area under the MPT curve" -- is this weighted by the number of masks or not? Wouldn't the weighted sum based on Prop. 1 directly measure LML here?
- Fig. 5: y axis label is cut off on the far left plot.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 4 excellent
Limitations: A major limitation that isn't addressed by the authors is this: why is MPT the right way to improve LML, instead of another method that directly maximizes LML? Empirically, masked pretraining methods like BERT/MAE are much better representation learners than alternate approaches that optimize the LML. The authors mention "autoregressive modeling" in their limitations section, but I don't think enough weight is put on it as a large question that this paper on MPT<-> LML introduces yet doesn't answer.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the positive consideration and the useful feedback on our work. We are particularly glad for the scores marked as *excellent* and for hearing that the connection between LML and the weighted sum of masked pretraining losses is considered *original* and *enlightening*. Overall, we appreciate the time you clearly took to review our paper, including the key questions and the rest of minor details attached. We've done our best to fit in this rebuttal clear answers to the principal points of your review. Additionally, we have performed extra experiments on the vision side with MAE to address one of the reviewer's main concerns.
**Point 1A** & **1B**
> *If maximizing LML is the ultimate reason why MPT is good, then why are other methods of directly maximizing LML (e.g., autoregressive models) worse?*
> *why is MPT the right way to improve LML, instead of another method that directly maximizes LML? (..)*
Our ambition is not to claim that maximising the LML is the **ultimate** reason why MPT works so well. Perhaps we need to revise our presentation if a final update of the paper is allowed.
Instead, we are strictly providing a positive answer to the question: *Is MPT related to the maximisation of LML?* The complete answer included in the paper is that MPT optimises according to a stochastic gradient of the LML. As the latter is a well-known measure of generalization ability in Bayesian models, we believe that the connection provides a new tool for analysing MPT. However, we do not claim it to be the unique reason, as MPT is a learning algorithm that might be combined with different models.
In the particular case of LLMs, we should not ignore the role that the chosen architecture for the model plays. One additional example that supports this point is the work of (Neyshabur, ICLR 2019), which says the following: *Our capacity bound correlates with the behavior of test error with increasing network sizes (...), and could partly explain the improvement in generalization with over-parametrization*. Even in this example, the explanation is assumed to be *partial*, and our perspective is similar in this way.
With respect to the *direct maximisation of the LML*, we cannot claim that it is generally *worse* than MPT. However, we have observed that MPT maximises a stochastic approximation of the training objective. From a practical perspective, this has advantages over direct maximisation of the objective: the computational cost is lower, and the stochastic optimisation might help to avoid local minima.
**Point 2**
> *Can you run the same BERT experiments with pretrained MAE models as well, (..)*
Yes. In particular, we reproduced the same empirical results obtained in **Section 3.2** with BERT, but for the MAE approach instead. We considered the exact model included in He et al., CVPR 2022. The results are shown in the additional PDF attached to the global response.
```
K. He, X. Chen, S. Xie, Y. Li, P. Dollár and R. Girshick. Masked Autoencoders Are Scalable Vision Learners. CVPR 2022
```
**Point 3**
> *The authors mention "autoregressive modeling" (..) but I don't think enough weight is put on it (..).*
Fair point. We understand the interest around *autoregressive modelling* and its connection with MPT. However, we do think that this topic falls slightly outside the scope of our work, as our main goal was to build the link between LML and MPT in a thorough manner.
Importantly, autoregressive modelling is mentioned in the discussion section to point out that there is still room for understanding the link with LML beyond the simplest version of MPT. This is not really a limitation, but rather potential future work. Our early hypothesis is that the proposed theory could accommodate conditioning between the tokens to be predicted. This could make the pre-training algorithm less of an approximation to the true LML. However, further analysis and theoretical development would be needed to prove this crucial point.
**Point 4.1**
> *Where did this "log-marginal likelihood" name come from? (...).*
The *marginal likelihood* name has been used in Bayesian ML for more than 25 years, at least. In his seminal works around 1998 on Bayesian NNs and Laplace approximations, David J.C. MacKay already mentions the marginal likelihood as a method for model comparison, while also providing even earlier references. A common synonym for marginal likelihood is *evidence*, which also gives its name to the acronym ELBO (Evidence Lower BOund), widely used in variational inference and VAEs, for instance. This probability is indeed the one being bounded by the ELBO.
```
D. JC MacKay. Choice of Basis for Laplace Approximations. Machine Learning, volume 33, pp. 77–86, Springer, 1998
```
**Additional comment:** We would have liked to include the rest of the answers to the questions in the minor comments and suggestions (those marked as point 4.1, 4.2, etc.). Unfortunately, we did not have enough space in this rebuttal box. As soon as the discussion phase begins, we will include the mentioned answers in an additional comment for the reviewer.
---
Rebuttal Comment 1.1:
Title: Response to additional questions (Part 1/2)
Comment: As mentioned in the main rebuttal, we include here the answers to the additional questions in the minor-points section of Reviewer **EYbT**. We hope this helps to address all remaining concerns, and we thank the reviewer again for the time taken to review our work.
**Point 6.2**
> *L87-92: (...). What do exchangeable and invariant mean?*
Here the words **not exchangeable** make reference to the fact that we cannot exchange different conditional probabilities freely, for instance, $p(x_1|x_2,x_3)\neq p(x_2|x_1,x_3)$. This might seem trivial for the reader, but certain statistical models use *exchangeability* as a key property between observations. Thus, we wanted to highlight that this is not the case in our work. On the other hand, **invariant** makes reference to the sum of conditional factors, whose result is always the same and equal to the value of the LML.
**Point 6.3**
> *L128: "we usually have a biased estimator" -- wouldn't a Monte Carlo estimator, with masks drawn from the right distribution, be unbiased?*
Yes. To obtain an unbiased estimator of the LML, Monte Carlo sampling would be a good idea. However, as the masking size is usually fixed in practice, we focused on understanding what happens in that case.
As this question was also raised by reviewer **mVT4**, we performed additional experiments to empirically check that this is indeed true. One important detail to mention is that we initially discarded this strategy because its computational cost might become prohibitive. If we consider uniform sampling of mask sizes and the resulting masking rates are around 0%-5%, the computational cost of considering all tokens might be unfeasible in some types of models.
The experimental results are also included in the additional PDF attached to the global response to all reviewers.
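To make the unbiasedness point concrete, here is a minimal toy sketch (the per-mask-size terms below are invented numbers, not values from the paper): if the LML decomposes as a sum of per-mask-size terms, drawing the mask size uniformly and rescaling yields an unbiased Monte Carlo estimate, whereas always using one fixed mask size leaves a constant bias.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-mask-size contributions to the LML (toy numbers only).
per_size_terms = np.array([-2.3, -1.7, -1.1, -0.6, -0.2])
T = len(per_size_terms)
exact_sum = per_size_terms.sum()

# Unbiased Monte Carlo estimator: sample the mask size uniformly, rescale by T.
def mc_estimate():
    t = rng.integers(T)
    return T * per_size_terms[t]

# Biased estimator: always use one fixed mask size (analogous to 15% masking).
fixed_estimate = T * per_size_terms[0]

mc_mean = np.mean([mc_estimate() for _ in range(200_000)])
assert abs(mc_mean - exact_sum) < 0.05        # unbiased: mean matches the sum
assert abs(fixed_estimate - exact_sum) > 0.5  # constant offset from fixing t
```

Under uniform sampling, the estimator's mean matches the full sum, while the fixed-size estimate differs by a constant offset, mirroring the constant bias we discuss below.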
**Point 6.4**
> *L157: "just one random mask per training epoch" -- my understanding is that masks are different for each sample in each epoch. (...).*
> *L165: "notice that P=1 is (...) What exactly is being claimed about the number of masks?*
Good catch. Perhaps this sentence is not clear enough without the proper context. What we mean by *just one random mask per training epoch* is the following:
**Proposition 1** tells us that the objective is equivalent to the LML if we average over **all** the different masking patterns. In practice, this means that we would have to sum over the total number of choices $\mathcal{C}_M$, which is unfeasible as this number increases prohibitively with the number of tokens per observation. In that sense, we assume that the expectation in Proposition 1 is approximated with fewer masking patterns (e.g. one mask). We analysed this effect in the experiments shown in Figure 1, where we vary the *number of random masks* ($x$-axis).
The main consequence of this analysis is that approximating the average in **Proposition 1** with only *one* random mask is useful in this scenario. That is why we mention it.
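As a toy illustration of this approximation (with a made-up per-mask score standing in for the masked-prediction loss), the exact average over all $\mathcal{C}_M$ masking patterns can be compared against the one-random-mask estimate:

```python
import math
import random
from itertools import combinations

random.seed(0)

M, k = 10, 3  # tokens per observation, mask size
all_masks = list(combinations(range(M), k))
assert len(all_masks) == math.comb(M, k)  # C_M grows combinatorially with M

# Hypothetical per-mask score standing in for the masked-prediction loss.
def score(mask):
    return -sum(mask) / 10.0

# Exact average over all masking patterns (only feasible for tiny M).
exact_avg = sum(score(m) for m in all_masks) / len(all_masks)

# One random mask per epoch: a one-sample, unbiased estimate of the average.
one_mask_estimates = [score(random.choice(all_masks)) for _ in range(50_000)]
approx = sum(one_mask_estimates) / len(one_mask_estimates)
assert abs(approx - exact_avg) < 0.01
```

Even though any single draw is noisy, its expectation matches the full average, which is why one random mask per epoch is a workable approximation.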
**Point 6.5**
> *L170: "this bias is known" -- known in what sense? (...)*
The *bias* of the estimate is given by the difference between the cumulative log-probabilities over all mask sizes and the log-probabilities obtained with only one fixed mask size. Additionally, this bias is constant, which is an important property for training the model. It can be observed as the distance between the shaded curves and the dark lines in Figure 2, and it could be computed if needed.
**Point 6.6**
> *Fig. 2 caption: "The rate of masking is unfixed and it varies from 1% to 100%" -- I may be misunderstanding here, but with 10 tokens, shouldn't the masking rate start at 10%?*
Good catch. It looks like a typo. We will correct it in the updated version.
---
Rebuttal Comment 1.2:
Title: Response to additional questions (Part 2/2)
Comment: **Point 6.7**
> *L194-195: probabilities imply integrating out the latent variables, (...)." -- why do there need to be latent variables at all? Latent variables are one way to induce a structured joint distribution over all dimensions of x, but this doesn't have to be used. In fact, latent variables aren't used in the proof of Prop 1.*
Fair point. We used latent variables as a way to link the theoretical results with the tractable PPCA model. In this case, posterior predictive probabilities are obtained by integrating out such latent variables. We were also interested in connecting our analysis with previous works such as Lucas et al., NeurIPS 2019. We agree that the proof holds without considering latent variables explicitly, as it directly uses log-posterior predictive probabilities. In some sense, this was our choice of probabilistic model for this work, but others would also be possible.
```
J. Lucas, G. Tucker, R. Grosse and M. Norouzi. *Don’t Blame the ELBO! A Linear VAE Perspective on Posterior Collapse*. NeurIPS 2019
```
**Point 6.8**
> *L209-210: "we used iterative encoding-decoding to produce the self-conditional probabilities for masked tokens — see Sec. F in Rezende et al. (2014)" -- it would be good to explain this procedure here. (...)*
Fair point. Obtaining conditional distributions for VAEs is not a trivial task. Some works in the recent literature have addressed this problem, but for MPT we were interested in the simplest way to do it at every training step.
In their seminal work, Rezende et al. 2014 (Appendix F) propose a way to do missing-data imputation (e.g. for non-observed pixels) using the VAE model. We reproduced the following steps described in their appendix:
*"(i) initializing the non-observed pixels with random values; (ii) sampling from the recognition distribution given the resulting image; (iii) reconstruct the image given the sample from the recognition model; (iv) iterate the procedure."*
We did the same in our experiments with our own data. This process converges to the desired conditional distribution (named conditional as it is conditioned on the observed entries $\mathbf{v}_o$). In their appendix this is clearly stated and referred to as the marginal, but it corresponds to the predictive distributions we use for MPT. They indeed give the following sentence:
*"conclude that given the above properties, (...) is guaranteed to generate samples from the correct marginal $p(\mathbf{v}_m|\mathbf{v}_o)$."*
Moreover, thanks for pointing this out; we will add extra details to improve the clarity and reproducibility of these experiments.
```
D. J. Rezende, S. Mohamed and D. Wierstra. Stochastic Backpropagation and Approximate Inference in Deep Generative Models. ICML 2014
```
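A minimal sketch of this iterative procedure, using a toy noiseless linear model in place of a trained VAE (the `pinv`-based least-squares "encoder" and linear "decoder" below are purely illustrative assumptions, not the recognition/generative networks from Rezende et al.):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a trained VAE: a noiseless linear "decoder" x = W z and a
# least-squares "encoder" z = pinv(W) x (illustrative only).
D, d = 6, 2
W = rng.normal(size=(D, d))
x = W @ rng.normal(size=d)                     # fully observed data point
observed = np.array([True, True, True, True, False, False])

# (i) initialise the non-observed entries with random values
x_hat = x.copy()
x_hat[~observed] = rng.normal(size=2)

pinv_W = np.linalg.pinv(W)
for _ in range(5000):
    z = pinv_W @ x_hat                         # (ii) "recognition" step
    x_hat[~observed] = (W @ z)[~observed]      # (iii) reconstruct, keep x_o fixed
                                               # (iv) iterate

# In this noiseless toy case the iteration recovers the true masked entries.
assert np.allclose(x_hat[~observed], x[~observed], atol=1e-5)
```

The key point mirrored here is that observed entries stay fixed at each iteration while masked entries are repeatedly re-imputed, so the procedure converges to values consistent with the observed part.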
**Point 6.9**
> *Fig. 3 caption: "Darker lines indicate the evolution of the LML" -- the figure legend does not match this. (...)*
The reviewer is right, this should be corrected. Thanks for pointing it out, we will update the plots if the paper moves forward.
**Point 6.10**
> *Also, I don't understand what the black lines in Fig. 4 (right) is supposed to denote.*
Right. We did not devote enough text here to explaining these lines. Simply put, we wanted to highlight the impact of choosing a 15% or 85% rate in the masking procedure. When doing so, we are indeed approximating the area under the curve with a constant mass (the area under the line). Interestingly, in some cases this area is larger or smaller, depending on the chosen masking rate.
**Point 6.11**
> *"This explains why the probabilities have an approximately exponential decay" -- I don't see how anything explains this.*
In the analysis of the results shown in Figure 4, we were referring to the (approximately) exponential decay of the curves as the masking rate increases beyond 90%. In practice, this means that the model does not have enough tokens left to accurately predict the masked ones, which is why the curves show low predictive probabilities.
Learning from Both Structural and Textual Knowledge for Inductive Knowledge Graph Completion | Accept (poster) | Summary: This paper proposes an inductive knowledge graph completion method based on both textual and structural information. It introduces a new task of how to leverage information from both modalities when there are both structured and textual data available. This paper proposes a rule mining method called LSTK-TELM, where LSTK represents a rule mining framework. Three datasets are constructed based on the relation extraction dataset for evaluation. Experimental results demonstrate the effectiveness of the proposed framework, with LSTK-TELM outperforming other baseline methods.
Strengths: S1. A novel knowledge graph completion scenario that combines both textual and RDF facts.
S2. This paper proposes a framework called LSTK to address this problem and introduces the LSTK-TELM method.
S3. Three datasets are constructed for evaluation. Experimental results demonstrate the effectiveness of the proposed method.
S4. Well written.
Weaknesses: W1. There is a lack of discussion on related works in the field of inductive link prediction. See references below:
[1] Inductive Relation Prediction by Subgraph Reasoning
[2] Topology-aware correlations between relations for inductive link prediction in knowledge graphs
[3] Communicative Message Passing for Inductive Relation Reasoning
[4] Subgraph Neighboring Relations Infomax for Inductive Link Prediction on Knowledge Graphs
[5] Disconnected Emerging Knowledge Graph Oriented Inductive Link Prediction
W2. The use of the symbol "t" for both tail entities and rule lengths may lead to confusion.
Technical Quality: 4 excellent
Clarity: 3 good
Questions for Authors: 1. Rules can also be used for transductive knowledge graph completion. Why does this paper specifically mention focusing on inductive KGC? There doesn't seem to be any specific design for the inductive setting.
2. Please discuss the limitations of this paper. For example, the application scenario of the proposed task requires lots of text data related to the KG.
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 4 excellent
Presentation: 3 good
Contribution: 3 good
Limitations: Please refer to the "Weaknesses" and "Questions" sections.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your helpful comments. The following addresses concerns and questions.
> [R1] There is a lack of discussion on related works in the field of inductive link prediction.
[A] Due to space limitations, we have only discussed related work on learning rule-based systems for inductive link prediction. In reality, three main categories of methods have been proposed for inductive link prediction: GNN-based methods, PLM-based methods and rule-based methods. GNN-based methods such as R-GCN and other SOTA methods [1-5] address inductive link prediction with graph neural networks (GNNs). PLM-based methods leverage pre-trained language models (PLMs) such as BERT. However, both GNN-based and PLM-based methods can be considered embedding-based methods due to their black-box nature; for instance, the hidden representations learned by GNNs and PLMs can hardly be interpreted by humans. Therefore, we focus on learning rule-based systems for inductive link prediction with textual knowledge, where logical rules (e.g., TE-rules) can serve as explanations for predictions. We naturally expect the proposed framework LSTK to be highly interpretable to humans, and have thus not incorporated GNN-based and PLM-based methods into LSTK for inductive KGC with textual knowledge. We will include these discussions in the revised version.
> [R2] The use of the symbol "t" for both tail entities and rule lengths may lead to confusion.
[A] Thank you for pointing out it. We will refine this notation in the revised version.
> [R3] Rules can also be used for transductive knowledge graph completion. Why does this paper specifically mention focusing on inductive KGC? There doesn't seem to be any specific design for the inductive setting.
[A] Inductive KGC is usually considered more practical and challenging in real-world scenarios. To ensure that the proposed settings and datasets are challenging and practical, we focus on inductive KGC in this work.
> [R4] Please discuss the limitations of this paper. For example, the application scenario of the proposed task requires lots of text data related to the KG.
[A] We have discussed this in our supplementary file (please see Section E: Discussion on Limitations). We discuss that the application scenario of our proposed framework requires KGs with corresponding text corpora. Considering that corresponding text corpora may be missing in most real-world KGs, we generate soft triples based on the distant-supervision assumption that any sentence mentioning two entities might express a relation between them. This assumption enables us to fetch texts via information-retrieval tools such as search engines, without the need for manually filtering texts highly aligned with KGs. It is worth noting that the original datasets of HacRED, DocRED and BioRel were all collected using distant supervision. Considering that the process of fetching texts may introduce massive noise into the soft triples, we propose a novel formalism for logical rules (i.e. the TE-rules) to mitigate the negative impact of that noise.
---
Rebuttal Comment 1.1:
Comment: The author's response has addressed most of my concerns. There is still one point to notice: the authors emphasize that GNN- and PLM-based link prediction is not human-interpretable and use this as a reason for not incorporating GNNs and PLMs. While interpretability is an advantage of rule-based methods, it is not a necessary requirement for inductive link prediction. Perhaps the authors should discuss them in light of the specificity of the proposed task, rather than interpretability.
---
Reply to Comment 1.1.1:
Title: Thanks for your helpful feedback!
Comment: Thank you again for taking the time to review our paper and providing valuable feedback. We will definitely improve our paper to clarify the relationship between related work and ours. We agree that it is necessary to discuss our work alongside GNN-based and PLM-based methods in light of the specificity of inductive link prediction, and we will include these discussions in the revised version. Specifically, in the response to reviewer Vj83, we have outlined the reasons why PLM-based methods are not suitable for our inductive settings. Regarding GNN-based methods, rather than prioritizing interpretability, we will compare them to our work from two other perspectives.
On one hand, GNN-based methods [1-5] necessitate subgraph extraction to handle test triples involving unseen entities. The process of subgraph extraction becomes highly time-consuming when dealing with a large number of soft triples (e.g., it takes about 10 hours for GraIL [1] to process 1M triples for subgraph extraction). This limitation impairs their practicality in our inductive scenario.
On the other hand, our link prediction setting comprises two types of triples, namely hard triples and soft triples. In LSTK, we employ the confidence scores of soft triples to capture the uncertainty of potential facts derived from supporting texts. However, these confidence scores cannot be directly incorporated into GNN-based methods, which would require a newly designed message-passing (aggregation) function to capture such features.
Thank you for the helpful discussion for improving our work again! Please feel free to provide additional feedback and we will try our best to improve our work! | Summary: This paper proposes a two-stage framework that imposes both structural and textual knowledge to conduct knowledge graph completion. The first stage aims to extract some soft triples with confidence scores. And then the second stage designs some tailored rules for entity link prediction, including text-enhanced rules and TE-rules. Experiments on three benchmarks show the proposed method significantly outperforms previous work.
Strengths: 1. The idea of this paper is clear.
2. Experimental results show significant improvements to baseline models.
Weaknesses: 1. Text-enhanced entity representation models, such as [1][2], have been widely studied. Related work should be discussed.
2. The two-step pipeline will bring unnecessary cascaded errors. The most direct way would be enhancing the entity representations with text representations to directly predict the entity links.
3. The first step with confidence score seems to model uncertainty in linking entities, which may be a novelty. However, such a motivation is not clearly described.
4. The relationship between the designed steps should also be emphasized. For example, the first step brings more entity semantics from text and the second step verifies the entity relations from both text and structure aspects.
5. LLMs have shown strong ability in building KGs and extracting entity relations. This paper does not conduct experiments on LLMs.
[1] Representation Learning of Knowledge Graphs with Entity Descriptions. AAAI 2016.
[2] Entity-duet neural ranking: Understanding the role of knowledge graph semantics in neural information retrieval. ACL 2018.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: The motivations of the designed two steps are unclear.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your helpful comments. The following addresses concerns and questions.
> [R1] The text-enhanced entity representation models have been widely studied, such as [1][2]. Related work should be discussed
[A] Due to space limitations, we have not discussed text-enhanced entity representation models such as [1-2]. These methods follow a different research line from ours. The methods in [1-2] focus on enhancing entity representations with text such as entity descriptions, relying on aligned text descriptions on entities as input. In contrast, we study the KGC setting where part of the knowledge (i.e. real-world facts) exists in a structured KG while the other part exists in a text corpus, so no entity descriptions are given in our setting. We first need to extract textual knowledge (i.e., a set of soft triples) from the given text corpus and then apply these soft triples to learn rule-based systems for inductive KGC. We will include these discussions in the revised version.
> [R2] The two-step pipeline will bring unnecessary cascaded errors. The most directed way is enhancing the entity representations using text representations to directly predict the entity links.
[A] The mentioned way is infeasible in the scenario considered in this paper. On one hand, we study learning rule-based systems from both structural and textual knowledge, and there are no entity representations in a rule-based system. We have not studied embedding-based methods in LSTK because they are black-box models with low interpretability and can hardly generalize to the inductive setting; we naturally expect the proposed framework LSTK to be highly interpretable to humans. On the other hand, text-enhanced entity representation models assume that manually annotated descriptions on entities are explicitly given, whereas the inputs in our setting are a KG and its corresponding text corpus, so there are no text descriptions on entities for enhancing entity representations. In our setting, we assume that part of the knowledge (i.e. real-world facts) exists in the structured KG, while the other part exists in the text corpus. Therefore, a two-step pipeline is necessary in LSTK: first extract potential facts (i.e. soft triples) from the given text corpus, and then apply these soft triples to learn rule-based systems.
> [R3] The first step with confidence score seems to model uncertainty in linking entities, which may be a novelty. However, such a motivation is not clearly described.
[A] The goal of the first stage in LSTK is not to link entities but to mine a set of potential facts (i.e. the soft triples) for learning rule-based systems. In this work, we assume that some knowledge exists in the given KG and some exists in the given text corpus. The motivation of the first stage in LSTK is to discover textual knowledge in the text corpus by mining soft triples. This process is not for linking entities, but for relation extraction over every entity pair mentioned in the texts.
> [R4] The relationship between the designed step should also be emphasized. Such as, the first step brings more entity semantics from text and the second step verifies the entity relations via text and structure aspects.
[A] The goal of the first step is not to bring more entity semantics from text but to mine potential knowledge (i.e. real-world facts) from the text corpus. The goal of the second step is to learn a rule-based system for addressing inductive KGC, using both the existing hard triples and the extracted soft triples. We have described the relationship between these two steps in the introduction (lines 44-52) and the methodology (lines 132-134).
> [R5] LLMs have shown strong ability in building KGs and exacting entity relations. This paper does not conduct experiments on LLMs.
[A] To the best of our knowledge, no published work has shown that LLMs such as ChatGPT and GPT-4 excel at inductive KGC. At the present stage, LLMs are unsuitable for adoption in our scenario, for two reasons. First, applying LLMs in the first stage of LSTK is costly, as this stage requires repeatedly applying the relation extraction model. For instance, more than 100 million soft triples are extracted from the BioRel dataset (please see Table 1), which would require extensive invocation of LLMs, while the number of allowed requests to LLMs is still limited to this day. Second, it is infeasible to apply LLMs such as ChatGPT and GPT-4 in the second stage for inductive KGC because we cannot directly fine-tune them for our task, due to resource limitations and their closed-source nature. One solution may be applying LLMs with few-shot or in-context learning, but this may also suffer from low reasoning quality and length limitations when handling long context (e.g., when the context of the entire KG is required). This is also outside our research scope, because LSTK focuses on providing faithful explanations for inductive KGC, based on a rule-based system learned from both structural and textual knowledge.
> [R6] The motivations of the designed two step is unclear.
[A] In our settings, we assume that part of the knowledge exists in a structured KG, while the other part exists in a text corpus. To mine textual knowledge (i.e., potential facts) from the given text corpus, we first employ a textual entailment model to generate soft triples. In the second stage, the generated soft triples are used to learn a rule-based system for inductive KGC. Considering the large number of mined soft triples, we cannot optimize these two steps in an end-to-end manner, as that would require storing the gradients of all soft triples, resulting in out-of-memory (OOM) issues. Thus, we designed a two-step framework in LSTK.
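To sketch the decoupling described above (with invented names and toy confidence values, not the actual LSTK implementation): stage one produces scored soft triples once, and stage two consumes them as fixed, gradient-free inputs, filtering by a confidence threshold:

```python
# Stage 1: a stand-in scorer emits soft triples with confidence scores.
# A real system would run a textual entailment model over supporting texts.
def extract_soft_triples(entity_pairs, scores):
    return [(h, "related_to", t, s) for (h, t), s in zip(entity_pairs, scores)]

# Stage 2: consume soft triples as fixed facts (no gradients are stored),
# keeping only those above a confidence threshold.
def filter_soft_triples(soft_triples, threshold=0.5):
    return [(h, r, t) for (h, r, t, score) in soft_triples if score >= threshold]

pairs = [("a", "b"), ("c", "d"), ("e", "f"), ("g", "h"), ("i", "j")]
scores = [0.05, 0.30, 0.55, 0.90, 0.48]   # toy confidences
soft = extract_soft_triples(pairs, scores)
hard_inputs = filter_soft_triples(soft)
# hard_inputs -> [("e", "related_to", "f"), ("g", "related_to", "h")]
```

Because stage two only reads the precomputed tuples, no per-triple gradients need to be kept in memory, which is the point of the two-step design.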
---
Rebuttal Comment 1.1:
Comment: Thank you for your explanation. My concerns have been addressed. Is there any possible way to feed signals from the rule-based system back to optimize the KG-sentence entailment method, for example via RL? The rule-based system seems like an expert that contains lots of logical-rule knowledge.
---
Reply to Comment 1.1.1:
Title: Thanks for your helpful feedback!
Comment: Thank you for taking the time to review our paper and providing helpful and valuable feedback. We will definitely improve our paper based on your comments. Furthermore, your additional feedback regarding the use of signals from rule-based systems to enhance the KG-sentence entailment method with optimization tactics such as RL is quite interesting and meaningful. In fact, we have been examining such proposals, including optimization tactics like RL and gradient grafting [1]. The primary challenge we face is training efficiency, as some datasets contain more than 100 million soft triples that would need to be used for enhancement. This may require well-designed heuristic sampling strategies to reduce the search space. Considering that the proposed two-step framework (LSTK) is efficient and effective enough (e.g., all MRR scores of LSTK-TELM exceed 0.5) at handling soft triples, we leave this investigation to future work. We have also mentioned this in the discussion of future work in the conclusion, and we will include more details in the revised version. Thank you again for the helpful discussion. Please feel free to provide additional feedback and we will try our best to improve our work further!
[1] Wang Z, Zhang W, Liu N, Wang J. Scalable Rule-Based Representation Learning for Interpretable Classification. In NeurIPS 2021: 30479-30491.
---
Rebuttal Comment 1.2:
Comment: I should have cited KEPLER and BERTRL, which learn entity representations, in W1. Rethinking it, maybe direct triples are enough, because the task is modeled as sentence entailment, and entity descriptions are long and may bring additional noise. But as shown in the other review, such a point can be discussed.
---
Reply to Comment 1.2.1:
Title: Thanks for your further feedback!
Comment: Thanks for your further valuable comments. We agree that the reason why we did not introduce entity descriptions as input in our settings should be further discussed. As you mentioned, entity descriptions are long and may bring additional noise; this is one of the reasons. Apart from this, we consider two more. First, requiring an aligned description for each entity may impair practicality, because descriptions may be missing for some entities in real-world scenarios, especially in certain vertical-domain KGs such as medical KGs. Second, there may be multiple descriptions (expressing different meanings) for some entities, which requires an effective mechanism to incorporate such signals into logical rules; otherwise, it might introduce more noise into the reasoning process. We will include these discussions in the revised version. Thanks again!
Strengths: - Since the proposed method LSTK relies on rule-based inference, the estimation of this method is interpretable to humans.
- The proposed approach can restrict the automatically extracted rules to high-confidence ones by TE-rules to deal with the noisy rules extracted by distant supervision.
- The threshold of filtering soft triples does not require sensitive tuning and works with 0.5. Thus, LSTK is robust and easy to use.
- This work provides new KG datasets with their corresponding texts.
Weaknesses: - The paper does not refer to the recent KGC model that can conduct KGC with unseen entities [1].
- Even though recent KGC models utilize pretrained language models considering textual information of triplets in KGs [2], there is no comparison against such models.
[1] Wang, X., Gao, T., Zhu, Z., Zhang, Z., Liu, Z., Li, J., & Tang, J. (2021). KEPLER: A unified model for knowledge embedding and pre-trained language representation. Transactions of the Association for Computational Linguistics, 9, 176-194. (https://direct.mit.edu/tacl/article/doi/10.1162/tacl_a_00360/98089/KEPLER-A-Unified-Model-for-Knowledge-Embedding-and)
[2] Lv, X., Lin, Y., Cao, Y., Hou, L., Li, J., Liu, Z., ... & Zhou, J. (2022, May). Do Pre-trained Models Benefit Knowledge Graph Completion? A Reliable Evaluation and a Reasonable Approach. In Findings of the Association for Computational Linguistics: ACL 2022 (pp. 3570-3581). (https://aclanthology.org/2022.findings-acl.282/)
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: As shown in the example of Figure 1, the proposed method LSTK can cover inferences of unseen entities and relations. However, as shown in the inductive setting of Wikidata5M [1], recent prompt-based KGC models can complete such triples based on text information like descriptions and pretrained knowledge similar to your work. Thus, to claim the advantage of LSTK, you need to compare it to the recent KGC models like KEPLER [1] and SimKGC [3]. Also, could you explain the difference between these models and LSTK?
[1] Wang, X., Gao, T., Zhu, Z., Zhang, Z., Liu, Z., Li, J., & Tang, J. (2021). KEPLER: A unified model for knowledge embedding and pre-trained language representation. Transactions of the Association for Computational Linguistics, 9, 176-194. (https://direct.mit.edu/tacl/article/doi/10.1162/tacl_a_00360/98089/KEPLER-A-Unified-Model-for-Knowledge-Embedding-and)
[3] Wang, L., Zhao, W., Wei, Z., & Liu, J. (2022, May). SimKGC: Simple Contrastive Knowledge Graph Completion with Pre-trained Language Models. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) (pp. 4281-4294). (https://aclanthology.org/2022.acl-long.295/)
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: I think you need to add to the Limitations section of your paper that this approach is difficult to apply to knowledge graphs that do not have corresponding text data.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your helpful comments. The following addresses concerns and questions.
> [R1] The paper does not refer to the recent KGC model that can conduct KGC with unseen entities [1].
[A] Due to space limitations, we have only discussed the related work on learning rule-based systems for inductive KGC. To date, three main categories of methods have been proposed to address KGC: GNN-based methods, PLM-based methods, and rule-based methods. GNN-based methods such as R-GCN address inductive link prediction with graph neural networks (GNNs). PLM-based methods such as [1-3] leverage pre-trained language models (PLMs) such as BERT to address the inductive KGC problem. However, both GNN-based and PLM-based methods can be considered embedding-based methods due to their black-box nature; for example, the hidden representations learned by GNNs and PLMs can hardly be interpreted by humans. Therefore, we focus on learning rule-based systems for inductive KGC with textual knowledge, where logical rules (e.g., TE-rules) can serve as explanations for predictions. We will include these discussions in the revised version.
> [R2] Even though recent KGC models utilize pretrained language models considering textual information of triplets in KGs [2], there is no comparison against such models.
[A] Approaches like [2] follow a different research line than LSTK and consider distinct textual knowledge. The textual knowledge considered in [2] encompasses the contexts of triples, descriptions of entities, and pre-trained knowledge. In contrast, we study the KGC setting where part of the knowledge exists in the KG, while the other part (i.e., potential facts) exists in a text corpus. In this setting, the input consists solely of the provided KG and its corresponding text corpus, without any accompanying entity descriptions. The textual knowledge considered in our work pertains to potential facts entailed by the text corpus; consequently, we mine soft triples from the text corpus in the first stage. We focus on learning a rule-based system because logical rules excel at explaining why a missing fact is inferred. While PLMs are capable of handling inductive KGC, they remain black-box models with limited interpretability and cannot induce logical rules as explanations for predictions.
It is worth noting that a possible way to apply KEPLER [1], PKGC [2] and SimKGC [3] to our scenario is to first pre-train a PLM such as BERT on the given corpus, and then utilize the pre-trained PLM to address inductive KGC. Although this avoids processing soft triples, it still fails to generalize to real-world application scenarios. To analyze the reasons, we first recall the formalization of our problem setting.
$\textbf{Problem statement}:$ Given a set of triples and a corpus for training $U_{train}=(\mathcal{G}_{train}, \mathcal{T}_{train})$, and a set of triples and a corpus for test $U_{test}=(\mathcal{G}_{test}, \mathcal{T}_{test})$, our inductive KGC setting aims to learn a KGC system based on $U_{train}$, and then evaluate the learnt system on $U_{test}$. During evaluation, for each head query $(?,r,t)$ or tail query $(h,r,?)$, the learnt system finds the answer with the highest estimated truth degree for the query, based on the background knowledge from $U_{train} \cup U_{test} \setminus \{(\{(h,r,t)\}, \emptyset)\}$.
It can be seen that in our inductive setting, KGC systems must process an unseen text corpus at test time. This setting is motivated by real-world application scenarios where we might need to fetch texts from search engines to find evidence for a new fact. PLM-based methods cannot work in this scenario because they can only obtain textual knowledge by pre-training the model on the given corpus, which is impractical in the inference phase: pre-training is time-consuming and requires massive computing resources. In contrast, the proposed LSTK-TELM can easily address the above scenario.
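For concreteness, the ranking evaluation the problem statement describes (return the answer with the highest estimated truth degree, scored by MRR and Hits@k as in the experiments) can be sketched minimally; the candidate score dictionaries below are hypothetical stand-ins for a learned system's truth degrees, not the paper's actual outputs.

```python
def rank_of_gold(scores, gold):
    """1-indexed rank of the gold entity among candidates (higher score = better)."""
    gold_score = scores[gold]
    return 1 + sum(1 for s in scores.values() if s > gold_score)

def mrr_and_hits(queries, k=10):
    """queries: list of (candidate_scores, gold_entity) pairs for head/tail queries."""
    ranks = [rank_of_gold(scores, gold) for scores, gold in queries]
    mrr = sum(1.0 / r for r in ranks) / len(ranks)
    hits = sum(1 for r in ranks if r <= k) / len(ranks)
    return mrr, hits

# Two toy tail queries with hypothetical truth degrees per candidate entity.
queries = [
    ({"a": 0.9, "b": 0.4, "c": 0.1}, "a"),  # gold ranked 1st
    ({"a": 0.2, "b": 0.7, "c": 0.5}, "c"),  # gold ranked 2nd
]
mrr, hits1 = mrr_and_hits(queries, k=1)
# mrr = (1/1 + 1/2) / 2 = 0.75; Hits@1 = 0.5
```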
Besides, we have tried applying methods like KEPLER [1] to the second stage of LSTK but found it prohibitively slow. This is because over 100 million soft triples are extracted for certain datasets (e.g., BioRel), and PLM-based methods, with their extensive parameter sizes, cannot handle such a large volume of triples efficiently.
We will include these discussions in the revised version.
> [R3] Prompt-based KGC models can complete such triples based on text information similar to your work. Could you explain the difference between these models and LSTK?
[A] The differences between the mentioned methods [1-3] and LSTK are two-fold. Firstly, the textual knowledge considered in [1-3] differs from that in LSTK: KEPLER [1], PKGC [2] and SimKGC [3] consider text information such as contexts of triples, descriptions aligned to entities (e.g., Wikidata5M), and pre-trained textual knowledge. In contrast, LSTK is proposed to address the KGC scenario where part of the knowledge (i.e., real-world facts) exists in a structured KG, while the other part exists in a text corpus. The input of this setting is the given KG and its corresponding corpus, so no description is attached to an entity or a fact. The textual knowledge considered in LSTK is a set of potential facts (i.e., the soft triples). Secondly, PLM-based methods such as [1-3] can also be seen as embedding-based methods, which are black-box models with low interpretability. In contrast, the proposed LSTK framework is able to induce logical rules (i.e., the TE-rules) as explanations for predictions.
> [R4] I think you need to add that this approach is difficult to work on knowledge graphs that do not have corresponding text data to Limitations in your paper.
[A] We have discussed this in our supplementary file (please see Section E: Discussion on Limitations).
---
Rebuttal Comment 1.1:
Title: I appreciate your detailed explanations.
Comment: The authors' response has cleared the position of their work that attempts to accomplish interpretable KGC. Considering the possibility of revising the paper based on the response's content, I will increase my recommendation score.
---
Reply to Comment 1.1.1:
Title: Thanks for your helpful feedback!
Comment: Thank you for taking the time to review our paper and providing valuable comments. We will definitely improve our paper based on your comments. Please feel free to provide additional feedback and we will try our best to improve our work! | Summary: This paper proposes a rule-based inductive KGC method with two stages. In the first stage, it extracts soft triples from a text corpus using distant supervision. In the second stage, the obtained soft triples are mixed with the original hard triples and used to learn a rule-based model for KGC.
Strengths: (1) It is interesting and intuitive to enrich the original triples in the KG by extracting new triples from a text corpus.
(2) The idea that uses extracted soft triples from text corpus to enhance logical rules is novel to some extent.
Weaknesses: (1) This paper uses both structural and textual knowledge for inductive KGC, but there is no discussion about other related works which also use structural and textual knowledge, such as KEPLER[1] and BERTRL[2].
(2) Since the proposed method extracts new triples from the text corpus, the unseen entities in the test set may be seen in the new soft triples. It would be better to provide statistics about the unseen entities with respect to the original triples, the soft triples, and both.
(3) Some notations are confusing, for example the "t" is used for a tail entity in a triple (h,r,t), and also used in "1≤t≤T".
[1] Wang, X., Gao, T., Zhu, Z., Zhang, Z., Liu, Z., Li, J., & Tang, J. (2021). KEPLER: A Unified Model for Knowledge Embedding and Pre-trained Language Representation. Transactions of the Association for Computational Linguistics, 9, 176–194.
[2] Zha, H., Chen, Z., & Yan, X. (2022). Inductive Relation Prediction by BERT. Proceedings of the AAAI Conference on Artificial Intelligence, 36(5), 5923-5931.
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: (1) This paper uses both structural and textual knowledge for inductive KGC, but there is no discussion about other related works which also use structural and textual knowledge, such as KEPLER[1] and BERTRL[2].
(2) Since the proposed method extracts new triples from the text corpus, the unseen entities in the test set may be seen in the new soft triples. It would be better to provide statistics about the unseen entities with respect to the original triples, the soft triples, and both.
(3) Some notations are confusing, for example the "t" is used for a tail entity in a triple (h,r,t), and also used in "1≤t≤T".
[1] Wang, X., Gao, T., Zhu, Z., Zhang, Z., Liu, Z., Li, J., & Tang, J. (2021). KEPLER: A Unified Model for Knowledge Embedding and Pre-trained Language Representation. Transactions of the Association for Computational Linguistics, 9, 176–194.
[2] Zha, H., Chen, Z., & Yan, X. (2022). Inductive Relation Prediction by BERT. Proceedings of the AAAI Conference on Artificial Intelligence, 36(5), 5923-5931.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 2 fair
Limitations: (1) This paper uses both structural and textual knowledge for inductive KGC, but there is no discussion about other related works which also use structural and textual knowledge, such as KEPLER[1] and BERTRL[2].
(2) Since the proposed method extracts new triples from the text corpus, the unseen entities in the test set may be seen in the new soft triples. It would be better to provide statistics about the unseen entities with respect to the original triples, the soft triples, and both.
(3) Some notations are confusing, for example the "t" is used for a tail entity in a triple (h,r,t), and also used in "1≤t≤T".
[1] Wang, X., Gao, T., Zhu, Z., Zhang, Z., Liu, Z., Li, J., & Tang, J. (2021). KEPLER: A Unified Model for Knowledge Embedding and Pre-trained Language Representation. Transactions of the Association for Computational Linguistics, 9, 176–194.
[2] Zha, H., Chen, Z., & Yan, X. (2022). Inductive Relation Prediction by BERT. Proceedings of the AAAI Conference on Artificial Intelligence, 36(5), 5923-5931.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your helpful comments. The following addresses concerns and questions.
> [R1] This paper uses both structural and textual knowledge for inductive KGC, but there is no discussion about other related works which also use structural and textual knowledge, such as KEPLER[1] and BERTRL[2].
[A] Due to space limitations, we have only discussed the related work on learning rule-based systems for inductive KGC. We have not discussed methods like KEPLER [1] and BERTRL [2] because they follow a different research line than our method. On one hand, these methods consider different textual knowledge from that of LSTK: the textual knowledge considered in [1-2] encompasses the contexts of triples, descriptions of entities, and pre-trained knowledge, and the purpose of introducing texts in these methods is to enrich the semantic information of entities, thereby obtaining improved entity representations. In contrast, we study the KGC setting where part of the knowledge exists in the KG, while the other part (i.e., potential facts) exists in a text corpus. In this setting, the input consists solely of the provided KG and its corresponding text corpus, without any accompanying entity descriptions. The textual knowledge considered in our work refers to the potential facts entailed by the text corpus. This setting is motivated by the scenario in which the existing facts in the structural KG are highly incomplete, and we can fetch textual evidence from the corpus to support a new fact. Therefore, we mine a set of soft triples from the text corpus to obtain textual knowledge in the first stage.
On the other hand, PLM-based methods such as KEPLER [1] and BERTRL [2] can be considered embedding-based methods, which are black-box models with low interpretability and cannot induce logical rules as explanations for predictions. In reality, we naturally expect the proposed framework LSTK to be highly interpretable to humans. Thus, we focus on learning rule-based systems for inductive KGC because logical rules excel at explaining why a missing fact is inferred. We will include these discussions in the revised version.
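The first-stage mining of candidate facts via distant supervision, as discussed above, can be illustrated with a minimal sketch; the toy sentences and entities are illustrative assumptions, and attaching a confidence to each candidate via an entailment model is only described here in a comment, not implemented.

```python
def distant_supervision(sentences, kg):
    """Label each sentence that mentions a KG-linked entity pair with that pair's relation.

    sentences: list of (text, (head, tail)) with entities already linked.
    kg: set of (head, relation, tail) facts.
    Returns candidate (sentence, head, relation, tail) instances; a downstream
    entailment model would score each one to yield soft triples with confidences.
    """
    facts_by_pair = {}
    for h, r, t in kg:
        facts_by_pair.setdefault((h, t), []).append(r)
    candidates = []
    for text, (h, t) in sentences:
        for r in facts_by_pair.get((h, t), []):
            candidates.append((text, h, r, t))
    return candidates

kg = {("paris", "capital_of", "france")}
sentences = [
    ("Paris is the capital of France.", ("paris", "france")),
    ("Rome hosted the summit.", ("rome", "italy")),  # no matching KG fact
]
distant_supervision(sentences, kg)
# -> [("Paris is the capital of France.", "paris", "capital_of", "france")]
```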
> [R2] Since the proposed method extract new triples from text corpus, if the unseen entities in test set may seen in the new soft triples. It is better to provide statistics about the unseen entities in original triples, soft triples and both triples.
[A] Thanks for suggesting these statistics. We provide the details as follows. About 98.7%/83.7%/79.4% of triples in the test set of HacRED/DocRED/BioRel involve entities unseen in the training triples, about 31.6%/84.9%/62.1% involve entities unseen in the extracted soft triples, and about 30.0%/67.8%/46.8% involve entities unseen in both the training triples and the extracted soft triples. This shows that a great number of test triples still involve unseen entities, even when the soft triples are used. Furthermore, we have conducted experiments to show whether the newly extracted soft triples can cover the triples in the test set (please see the baseline Textual Copy Rule (TCR) and variant model (2) in Table 2). We will include these statistics in the revised version.
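Coverage numbers of this kind reduce to simple set arithmetic over entity vocabularies; a minimal sketch with toy triples (illustrative values, not the paper's data):

```python
def unseen_entity_fraction(test_triples, background_triples):
    """Fraction of test triples whose head or tail entity never appears in the background."""
    seen = set()
    for h, _, t in background_triples:
        seen.update((h, t))
    unseen = sum(1 for h, _, t in test_triples if h not in seen or t not in seen)
    return unseen / len(test_triples)

# Toy (head, relation, tail) triples; not the paper's datasets.
train = [("paris", "capital_of", "france")]
soft = [("berlin", "capital_of", "germany")]
test = [("rome", "capital_of", "italy"), ("berlin", "located_in", "germany")]
unseen_entity_fraction(test, train)         # 1.0: no test entity appears in the training triples
unseen_entity_fraction(test, train + soft)  # 0.5: berlin and germany appear in the soft triples
```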
> [R3] Some notations are confusing, for example the "t" is used for a tail entity in a triple (h,r,t), and also used in "1≤t≤T".
[A] Thank you for pointing it out. We will refine this notation in the revised version.
---
Rebuttal Comment 1.1:
Comment: The authors' response has cleared some issues, I will increase my recommendation score.
---
Reply to Comment 1.1.1:
Title: Thanks for your helpful feedback!
Comment: Thank you for taking the time to review our paper and providing valuable comments. We will definitely improve our paper based on your comments. Please feel free to provide additional feedback and we will try our best to improve our work. Thank you again for the helpful discussion on improving our work! | Summary: This paper addresses the problem of knowledge graph completion by proposing a two-stage framework — learning from structural and textual knowledge (LSTK). This novel framework leverages both structural and textual knowledge to learn rule-based systems, which provides a unique approach in the realm of KGC research. The paper is premised on the idea that the typical reliance on structural knowledge alone in rule-based systems is a limiting factor. This leads to the proposition of a system that utilizes "soft triples", or triples with confidence scores derived from textual corpora via distant supervision and a textual entailment model with multi-instance learning. In the second stage, a rule-based system for KGC is learned using both these soft triples and the hard triples of the existing KG. The paper also introduces a novel formalism for learning rules, referred to as text-enhanced rules or TE-rules, which can mitigate the negative impact of noisy data in soft triples.
Strengths: — The two-stage framework introduces an innovative way of leveraging both structural and textual knowledge for KGC, thereby potentially mitigating the limitations of existing methods.
— The introduction of TE-rules provides a robust mechanism for learning rules, to mitigate the negative impact of noises from soft triples.
— The paper introduces three new datasets for empirical evaluation, contributing further to the body of resources available in this field.
Weaknesses: — The authors utilize a single method to estimate the confidence scores of the soft triples, resulting in a potential lack of generalizability.
— While the introduction of three new datasets is commendable, the paper does not offer a clear comparison of these datasets with established benchmarks.
— Furthermore, it remains uncertain how well the proposed method would perform on existing benchmarks.
— The paper falls short in discussing the scalability of the proposed method. It is unclear how the proposed framework would handle large-scale knowledge graphs, particularly considering the potential computational costs of generating and handling soft triples.
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: — Can you elaborate on why you use only a textual entailment model based on the pre-trained language model BERT to estimate the confidence scores of the soft triples, and discuss its generalizability? Also, which exact BERT variant do the authors use?
— How do the newly introduced datasets compare with established benchmarks? Moreover, how would the proposed LSTK framework perform on these existing benchmarks?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 3 good
Contribution: 2 fair
Limitations: n/a
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your helpful comments. The following addresses concerns and questions.
> [R1] The authors utilize a single method to estimate the confidence scores of the soft triples, resulting in a potential lack of generalizability.
[A] We have conducted an ablation study on replacing the proposed method for extracting soft triples (please see variant model (6) in Table 2). This variant model employs a strong baseline [1] in the field of distantly supervised relation extraction. Results show that the proposed method outperforms the variant in three datasets. This demonstrates the generalizability of the textual entailment model.
> [R2] While the introduction of three new datasets is commendable, the paper does not offer a clear comparison of these datasets with established benchmarks.
[A] As far as we know, there are no established benchmarks with corresponding text corpora that meet our setting, in which part of the knowledge (i.e., real-world facts) exists in a structured KG while the other part (i.e., potential facts) exists in a text corpus. Consequently, we introduce three new datasets to evaluate the effectiveness of LSTK and to facilitate research on addressing KGC from both structural and textual knowledge perspectives. The most relevant dataset for our study is Wikidata5M [2], which provides aligned text descriptions of entities. While that dataset can be utilized to assess approaches focused on enhancing entity embeddings for KGC, it is not suitable for our scenario, which assumes that a set of potential facts is implied by the corpus and requires mining those potential facts from it. Furthermore, our scenario presents a greater challenge since the relationship between entities and the provided text corpus is unknown, whereas Wikidata5M provides an aligned description for each entity. We will include these discussions in the revised version.
> [R3] Furthermore, it remains uncertain how well the proposed method would perform on existing benchmarks.
[A] As mentioned in our discussion on limitations (Please see section E: Discussion on Limitations in the supplement file), our proposed framework, including the proposed TE-rules and TELM model, cannot effectively function for KGs without text corpora. For example, the introduced textual relations in TE-rules exist exclusively within KGs with soft triples. Therefore, we have not conducted an evaluation of LSTK-TELM on established benchmarks such as WN18RR, FB15k-237 and YAGO3-10.
> [R4] The paper falls short in discussing the scalability of the proposed method. It is unclear how the proposed framework would handle large-scale knowledge graphs, particularly considering the potential computational costs of generating and handling soft triples.
[A] As reported in Table 1, about 38.8/4.7/123.4 million soft triples are extracted in the first stage for the HacRED/DocRED/BioRel datasets. The computational cost of generating soft triples lies mainly in the BERT-based textual entailment model, which takes about 6/0.2/18 hours to compute all soft triples for HacRED/DocRED/BioRel. Furthermore, the proposed LSTK-TELM model is able to handle both the existing triples and the soft triples efficiently, as it stores all triples in sparse adjacency matrices. In more detail, each query in the HacRED/DocRED/BioRel datasets takes about 1.5/0.3/2.5 seconds to evaluate. We will include these discussions in the revised version.
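Sparse-adjacency storage of the kind mentioned above keeps memory proportional to the number of (soft) triples, and composing two relations along a rule body becomes a sparse product; a minimal pure-Python sketch, where the max-product combination of confidences is an illustrative choice rather than TELM's actual aggregation:

```python
def build_adjacency(triples, num_relations):
    """One sparse adjacency map per relation: adj[r][head] = {tail: confidence}."""
    adj = [{} for _ in range(num_relations)]
    for h, r, t, conf in triples:
        adj[r].setdefault(h, {})[t] = conf
    return adj

def compose(a, b):
    """Sparse 'matrix product' of two adjacency maps: paths head -r1-> mid -r2-> tail,
    multiplying confidences along a path and keeping the best path per (head, tail)."""
    out = {}
    for h, mids in a.items():
        for m, c1 in mids.items():
            for t, c2 in b.get(m, {}).items():
                row = out.setdefault(h, {})
                row[t] = max(row.get(t, 0.0), c1 * c2)
    return out

# Toy soft triples as (head, relation, tail, confidence) with integer entity ids.
triples = [(0, 0, 1, 0.9), (1, 1, 2, 0.8)]
adj = build_adjacency(triples, num_relations=2)
compose(adj[0], adj[1])  # entity 0 reaches entity 2 with confidence 0.9 * 0.8
```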
> [R5] Can you elaborate why only use a textual entailment model based on the pre-trained language model BERT to estimate the confidence scores of the soft triples and discuss its generalizability? And what exact BERT model variant do authors use?
[A] We use the BERT-base model for textual entailment; implementation details are reported in Section 5.2. We have in fact examined the generalizability of the textual entailment model in Table 2, where we compare with a strong baseline [1] from the field of distantly supervised relation extraction (please see variant model (6) in Table 2). Results show that the proposed method outperforms this variant on all three datasets, which demonstrates the generalizability of the textual entailment model. This may be because a textual entailment model can deal with open relations and exploit more textual semantics of relations (e.g., relation contexts) than a relation extraction model does (please see lines 311-315 for more details).
> [R6] How do the newly introduced datasets compare with established benchmarks? Moreover, how would the proposed LSTK framework perform on these existing benchmarks?
[A] As far as we know, there are no established benchmark KGs with corresponding text corpora that meet our problem setting, so we introduce three new datasets for evaluation. The most relevant dataset for our study is Wikidata5M [2], which offers aligned text descriptions of entities. Nevertheless, Wikidata5M is not suitable for our scenario because its text corpus mainly encompasses descriptions of entities, whereas our scenario requires extracting a set of potential facts from the given corpus. Therefore, we have not evaluated LSTK on Wikidata5M.
On the other hand, the proposed TE-rules and TELM model are designed to handle soft triples with textual relations. Thus, we have not evaluated LSTK-TELM on existing benchmarks such as WN18RR, FB15k-237 and YAGO3-10. This has also been discussed in the section on limitations (please see section E: Discussion on Limitations in the supplement file).
**Reference:**
[1] Fine-tuning pre-trained transformer language models to distantly supervised relation extraction. In ACL, pages 1388–1398, 2019.
[2] KEPLER: A Unified Model for Knowledge Embedding and Pre-trained Language Representation. In TACL, 9, 176–194, 2021.
---
Rebuttal Comment 1.1:
Title: Thanks for your response!
Comment: I appreciate the author's rebuttal, and most of my concerns have been addressed. So, I am inclined to accept this paper.
---
Reply to Comment 1.1.1:
Title: Thanks for your helpful feedback!
Comment: Thank you for taking the time to review our paper and providing helpful comments. We will definitely improve our paper based on your feedback. Please feel free to provide additional feedback and we will try our best to improve our work. Thanks again! | null | null | null | null | null | null |
DynPoint: Dynamic Neural Point For View Synthesis | Accept (poster) | Summary: DynPoint is an algorithm proposed to enhance the synthesis of novel views in monocular videos using neural radiance fields. It focuses on predicting explicit correspondence between neighboring frames, enabling information aggregation for view synthesis. The experimental results demonstrate a significant reduction in training time, outcomes comparable to previous approaches, and robustness in handling uncontrolled and lengthy videos without requiring a canonical representation of the video content. To achieve this, the authors propose improving the estimated scene flow from the optical flow with corrected depth values.
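The summary's "scene flow from optical flow with corrected depth values" can be illustrated generically: unproject a pixel with its frame-1 depth, follow the 2D optical flow, unproject again with the frame-2 depth, and difference the two 3D points. The pinhole intrinsics and constant-depth toy values below are illustrative assumptions, not DynPoint's exact formulation.

```python
def unproject(x, y, d, fx, fy, cx, cy):
    """Pixel (x, y) with depth d -> 3D camera-space point under a pinhole model."""
    return (d * (x - cx) / fx, d * (y - cy) / fy, d)

def scene_flow(p, d1, d2, flow_xy, fx, fy, cx, cy):
    """3D displacement: unproject in frame 1, follow the 2D flow, unproject in frame 2."""
    x, y = p
    X1 = unproject(x, y, d1, fx, fy, cx, cy)
    x2, y2 = x + flow_xy[0], y + flow_xy[1]
    X2 = unproject(x2, y2, d2, fx, fy, cx, cy)
    return tuple(b - a for a, b in zip(X1, X2))

# Constant depth of 2 m and fx = 100: a 5-pixel horizontal flow is 2 * 5 / 100 = 0.1 m in X.
sf = scene_flow((10, 10), 2.0, 2.0, (5.0, 0.0), fx=100, fy=100, cx=32, cy=32)
# sf is approximately (0.1, 0.0, 0.0)
```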
Strengths: The proposed technique for scene flow estimation is a solid contribution. It accounts for two inconsistencies, in depth and in flow, which the method denoises. The improvements can be applied as a standalone technique on top of any baseline (e.g., NSFF). Moreover, this allows extracting correspondences for an arbitrary 3D scene.
The hierarchical scheme to render the semi-explicit structure is a novel and trustworthy way to obtain a fast method building on existing research.
The experiments demonstrate the substantial improvements of the proposed method and provide a strong contribution to the field.
Weaknesses: The proposed method relies heavily on existing models, and it is unclear how the quality changes if they are replaced with others.
The authors don't investigate the quality of the correspondence, which can be interesting in itself. The easiest way is to extend Section 4.4 with a comparison against a method like "Deformable Sprites" by Ye et al.
The presentation of the visual results can be improved. Figure 3 should have close-ups emphasizing the regions of interest; otherwise, the results are not clear, as with the other figures.
Not compared with "DynIBaR: Neural Dynamic Image-Based Rendering". Yes, I understand that it is recent work; at least add its numbers to the table for the Nvidia dataset.
Without video results, it is difficult to reason about the method even given such metric improvements on average.
There is no limitations or ethics section to give the reader explicit statements.
Technical Quality: 4 excellent
Clarity: 3 good
Questions for Authors: 1. How do you measure the time? Does it include phase 1, and is the hardware the same as the competitors'?
2. The “neural point cloud” technique is similar to the representations in [1, 2, 3] in some sense (humans & static scene examples). Could you extend the related works on neural point clouds as well? Is the hierarchy an important contribution for the neural point cloud? If so, it can be
3. What is the main drawback of the method on the iPhone dataset? Is it possible to extend the comparison on this benchmark with others, since the gap is much smaller than on the other benchmarks?
4. Do you consider including video metrics for the results (e.g. VMAF)?
5. How can the issues in Figure 5 be addressed, and what is causing them?
[1] PointAvatar: Deformable Point-based Head Avatars from Videos
[2] Self-Improving Multiplane-To-Layer Images for Novel View Synthesis
[3] Pulsar: Efficient Sphere-based Neural Rendering
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 3 good
Contribution: 3 good
Limitations: Fast changes in the scene as well as thin objects can be a problem for the method, which is quite common for this class of models. Compared with other methods, this one needs more information and additional pretraining steps, and category-specific methods can be better for non-rigid parts. The scalability of the method is limited due to the two-stage training for each scene.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: # Section 4 - Reviewer BEKz
We thank the reviewer for the constructive assessment of our work. In the following, we address the concerns point by point. Please feel free to use the discussion period if you have any additional questions.
## 4.1 Weakness
**4.1.1 Deformable sprites**
Unlike our work, which focuses on 3D relationships, "Deformable Sprites" aims to learn a non-rigid 2D pixel-wise transformation from video data, enabling motion separation without supervision. This makes a direct comparison between our learned 3D correspondences and their results challenging. However, both approaches utilize optical flow predicted by RAFT [53] to help determine the correspondence between two frames. The concept of using video to understand motion without supervision for generating new views is attractive, as it enables specific rules for different motion groups.
**4.1.2 Region of interest**
Certainly, highlighting the key area will improve reader understanding. We'll include this in the final version.
**4.1.3 DynIBaR**
DynIBaR is indeed a study relevant to ours. We will add their metrics in the final version, and we also provide a comparison in our response to Reviewer Rm9i (Sec. 3.1.1).
**4.1.4 Video results**
We're grateful for your insightful observation. Following the instructions for the rebuttal process, we've added an anonymous link to the video demo we've made for the HyperNeRF dataset. You can find this link in our response to the AC under the "Official Comment" box at the top of the review page. More demos can be found in the PDF document provided to the AC.
**4.1.5 Limitation or ethical section**
Our method's primary limitation lies in its generalizability. Addressing this issue would significantly enhance the feasibility of DynPoint. Further details can be found in our response to Reviewer J3AP (Sec. 1.2.2).
## 4.2 Questions
**4.2.1 Measurement of time**
We measured both stages of DynPoint on the same computer setup.
**4.2.2 Neural point cloud**
Your insights are appreciated. Recent findings in Sec. 4.3 (main text), Ablation Studies, confirm the positive impact of our introduced hierarchical structure on overall performance. The three referenced papers offer interesting viewpoints on addressing view synthesis challenges.
In reference to [1], it highlights neural points for video-based Head Avatars reconstruction. Our approach reconstructs point clouds from refined depth and predicted scene flow, differing from [1]'s emphasis on direct learning of canonical point cloud representation. Our research seeks a universal solution for monocular video-based view synthesis tasks, diverging from [1]'s specific focus. Implementing our hierarchical structure might enhance [1]'s approach.
Regarding [2] and [3], their strategies differ from our neural point methodology. [2] introduces front-parallel planes for static scenarios, and [3] proposes spheres for a similar purpose.
**4.2.3 iPhone dataset**
The smaller gaps in the iPhone dataset are due to its limited information from various angles, as indicated by the "effective multi-view factors" proposed by DynCheck [17]. This dataset naturally has fewer diverse camera views, making the reconstruction of appearance more challenging compared to the Nvidia and Nerfies datasets.
**4.2.4 Video metrics**
Your question prompted us to consider a new perspective. We've realized that VMAF has broader applications beyond creating new views from videos. It can be useful for evaluating view synthesis in static scenarios as well. It's intriguing that this aspect hasn't been widely explored in existing research. Therefore, we acknowledge the significance of reevaluating and thoroughly examining the suitability of VMAF's features for view synthesis tasks.
**4.2.5 Figure 5**
Figure 5 highlights three failure instances: facial expressions, reflection-induced artifacts, and fine-object view synthesis. Facial expression reconstruction is challenging because humans are highly sensitive to these artifacts. Models like 3DMM improve this aspect.
Reflections can change surface appearances, making it hard to distinguish actual geometry from reflections. Illumination and viewing angles further complicate achieving consistent appearances. Advancements like NeRFReN (Guo, Yuan-Chen, et al.) integrate reflections for better results.
View synthesis for fine objects or high-resolution images is an active research area. Innovative architectures, such as "3D Gaussian Splatting for Real-time Radiance Field Rendering," are designed to solve it.
## 4.3 Limitations
**4.3.1 Fast change \& Thin object**
Indeed, dealing with fast changes and thin objects in scenes poses challenges. Fast changes mean less matching between frames, and thin objects involve complex geometry. Both of these aspects make monocular-video-based view synthesis more difficult.
**4.3.2 Pretraining steps**
Yes, the initial pretraining is vital for our approach. However, we use fewer learnable weights, leading to shorter total training times than other methods.
**4.3.3 Category-specific method**
It's true that incorporating category-specific details can improve the performance of view synthesis models in specific scenarios, e.g., SMPL for human reconstruction and 3DMM for facial expression reconstruction. Our paper focuses on introducing a generalizable algorithm that doesn't rely on specific prior assumptions about objects.
**4.3.4 Scalability**
Scalability, i.e., maintaining optimal performance on bigger and more complex tasks or datasets, is a consideration for our model. It affects the initial stage of DynPoint, which involves explicit correspondence learning. It is worth noting that one-stage models face similar hurdles: our experiments on the iPhone dataset revealed performance declines across all models. One-stage approaches like NSFF aim for a universal video representation, which requires implicit frame correspondence; the lack of explicit correspondence makes that implicit learning more complex.
---
Rebuttal Comment 1.1:
Title: Reply
Comment: I want to thank the authors for their effort in addressing my concerns. After reading their response and considering the feedback from other reviewers, I believe that the new experiments provide a comprehensive evaluation of their proposed approach for me, and I maintain a positive view on its acceptance.
---
Reply to Comment 1.1.1:
Title: Response to Comments
Comment: Dear reviewer BEKz,
We extend our sincere appreciation for your invaluable and constructive suggestions. Your feedback has played a crucial role in enhancing the final version of our paper. Your insights and recommendations have greatly contributed to the refinement of our work. Thank you for your time and effort in providing such insightful comments, which have undoubtedly strengthened the quality of our research.
Best regards,
---
Rebuttal 2:
Comment: Dear Reviewer,
Thanks for your valuable comments. Would you please have a look at the authors' rebuttal and other reviewers' comments and share your comments here? Thanks! | Summary: The paper proposes a novel method for performing novel view synthesis from monocular captured videos. This is done by learning scene flow and depth parameters which enable the accurate aggregation of appearance information from nearby frames. The paper proposes a novel method of acquiring both consistent depth estimations for a video, and consistent scene flow represented by an MLP. These two sources of information are used to synthesize novel views at a specific time from an arbitrarily specified viewpoint by sampling from corresponding information in a window of nearby frames. The proposed method is demonstrated to outperform existing methods based on representation learning in synthesizing novel views from monocular videos, and this is shown for a wide variety of datasets.
AFTER REBUTTAL:
I have read the author rebuttal, and believe that it has addressed my questions. I appreciate the detailed discussion on the comparison to DynIBaR. I also believe the additional videos provided have definitely helped improve my opinion of the method, although I wish the method was tested with more qualitative video examples (as mentioned by another reviewer). However, I believe the testing now is sufficient.
Strengths: In my opinion, the main strengths of the paper are that:
1. The presentation of the paper is concise, clear, and thorough.
- The introduction and related work I find to be comprehensive, and accurately describe what the contributions are and what the current issue is with the practice.
- The methods section is described in detail, and is relatively clear to understand and follow.
- The figures are all informative and demonstrate the capabilities of the method. One thing I wish could be added is a video result demonstrating consistent novel view synthesis.
2. The proposed contributions are original and impactful.
- The module for estimating consistent depth for a monocular video seems novel, and while there is existing work tackling this (see questions), the method proposed seems to generate good enough results for novel view synthesis with DynPoint.
- The method for estimating and representing the scene flow between adjacent frames and using this to aggregate information from reference to target frames seems novel (although based on image-based rendering ideas), and learning this from the video seems like an important contribution.
3. The evaluations are sound.
- The evaluations are done for many (4) different datasets, and compare to many different dynamic scene representation methods (Nerfies, D-NeRF, NSFF, HyperNeRF among others). Because of how thorough these evaluations are, and DynPoint outperforms all of the baseline methods, it is very convincing that the proposed method is robust and is able to generate high-quality results.
- The ablations are relatively thorough, and Table 4 shows how the individual parts of the representation and aggregation pipeline contribute to the final quality. I found it to be informative about the method.
Weaknesses: I do not view there to be many major weaknesses of the paper. One thing which would be interesting is a comparison to the method DynIBaR [31]. The paper mentions that this method is limited to short videos due to the capacity of MLPs or other representation mechanisms (L34), but from my understanding this method uses a similar philosophy as this work where new frames are synthesized as inspired by image-based rendering and scene flow estimation. This seems to me to be the current state-of-the-art. Additionally, it’s not clear that the videos this method is evaluated on are long enough for this to be a concern, considering NeRF-based methods can be evaluated and these are constrained by the capacity of the MLP being used to represent the scene. Additionally, I think there could be a bit more discussion on the limitations of the proposed method (see limitations).
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: - How does the generated consistent video depth estimation compare to other work in this field, for example: https://roxanneluo.github.io/Consistent-Video-Depth-Estimation/. Could this method be dropped in and used for this estimation step?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 3 good
Limitations: The paper adequately addresses the limitations of the method. One additional limitation that discussion of would significantly improve the quality of the paper, is the amount of differing from the monocular video path which can be taken and still synthesize reasonable novel views. Obviously, since this method is using appearance information in nearby frames, it is likely to not be able to hallucinate information beyond these. However, NeRF-based methods are able to put something there and potentially reconstruct better results due to the smoothness of the representation learned by the MLP. Some comparison of the existing methods and the proposed method on this axis (range of possible novel views synthesized) would be helpful, and if this is a limitation of the proposed method then it should be discussed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: # Section 3 - Reviewer Rm9i
We thank the reviewer for the constructive assessment of our work. In the following, we address the concerns point by point. Please feel free to use the discussion period if you have any additional questions.
## 3.1 Weaknesses
**3.1.1 Comparison with DynIBaR**
This is an insightful observation. Indeed, our perspectives align closely, and the two works are concurrent. Nevertheless, there are discernible distinctions between DynPoint and DynIBaR, which underscore the robustness of the proposed DynPoint methodology.
Both DynIBaR and DynPoint harness information aggregation mechanisms to realize the synthesis of novel views. However, DynIBaR predominantly centers around the aggregation of information through two-dimensional (2D) pixel units. This approach draws inspiration from image-based rendering principles, entailing the synthesis of novel perspectives from a collection of reference images via a weighted fusion of reference pixels. In contrast, DynPoint's focal point lies in the information aggregation achieved by constructing three-dimensional (3D) neural point clouds. The final novel view synthesis is realized by using 3D neural points surrounding the queries' position.
Moreover, DynPoint introduces an effective strategy for integrating monocular depth estimation into monocular video view synthesis. In contrast to DynIBaR, which models the trajectory of all samples (128) traversing each ray, DynPoint focuses exclusively on the trajectory of surface points, thereby yielding a substantial acceleration in both the training and inference stages.
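To make the contrast concrete, here is a minimal, illustrative sketch of the 3D neural-point aggregation idea described above. The function name, the inverse-distance weighting, and the NumPy implementation are simplifications for exposition only, not the actual DynPoint code:

```python
import numpy as np

def aggregate_neural_points(query_xyz, point_xyz, point_feat, k=8, eps=1e-8):
    """Aggregate features of the k nearest neural points around a query position.

    query_xyz : (3,)   query location in world space
    point_xyz : (N, 3) neural point cloud positions
    point_feat: (N, F) per-point features
    Returns an (F,) feature obtained by inverse-distance weighting.
    """
    dists = np.linalg.norm(point_xyz - query_xyz, axis=1)  # (N,) distances to query
    nn = np.argsort(dists)[:k]                             # indices of k nearest points
    w = 1.0 / (dists[nn] + eps)                            # inverse-distance weights
    w = w / w.sum()                                        # normalize to a convex combination
    return (w[:, None] * point_feat[nn]).sum(axis=0)
```

In contrast, a 2D pixel-based scheme like DynIBaR's would blend reference pixels directly; aggregating in 3D lets the rendered feature follow the reconstructed geometry.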
**3.1.2 Long video**
We value your insight. To address this, we examined our model using the iPhone dataset, encompassing numerous images (several hundred frames) across different scenarios in comparison to Nerfie and Nvidia datasets. Our model demonstrated stable performance within this context. However, we acknowledge a minor improvement in the iPhone dataset compared to the significant enhancements seen in Nvidia and Nerfie datasets. This discrepancy can be attributed to the limited information available from various angles in the iPhone dataset, as indicated by the "effective multi-view factors" proposed by DynCheck [17].
To further explore DynPoint's capabilities with longer videos, we followed Reviewer J3AP's suggestion and conducted relevant experiments on the HyperNeRF dataset, which also contains hundreds of frames per scenario. The results, demonstrating our algorithm's proficiency in handling longer videos, are presented in the attached PDF document. For clarity, we've also included a subset of results in our response to Reviewer J3AP (Sec. 1.1.3).
## 3.2 Questions
**3.2.1 Consistent-Video-Depth-Estimation**
Thank you for bringing this up. It's a notable point, as the paper you mentioned indeed provides valuable insights. Both our methods draw inspiration from traditional structure-from-motion reconstruction principles to establish the connection between monocular depth and optical flow. It's possible that their algorithm could be integrated into our depth estimation process.
However, there are distinct differences between our DynPoint approach and theirs. Their focus is primarily on achieving accurate depth estimation, which is also a key aspect of our work, but our method estimates scene flow and depth information jointly.
Another difference lies in the requirements of our methods. While their approach involves fine-tuning a complex convolutional neural network-based depth estimation architecture tailored for each scenario, our method calls for fine-tuning a simpler scene flow MLP structure and learnable scale factors of the depth map. This divergence significantly reduces the computational demands during the optimization process.
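To illustrate the "learnable scale factors of the depth map" mentioned above, the following toy sketch fits a single scalar scale that aligns one frame's monocular depth to a reference frame at known correspondences. The plain-gradient-descent setup and all names are ours for exposition, not the actual DynPoint optimization:

```python
import numpy as np

def fit_depth_scale(d_ref, d_src, steps=500, lr=0.01):
    """Fit a scalar scale for a source frame's monocular depth so that it
    agrees with the reference frame at corresponding pixels.

    d_ref, d_src : (M,) monocular depths of the SAME surface points, each up
    to an unknown per-frame scale (the reference scale is fixed to 1).
    Returns the learned scale for the source frame.
    """
    s = 1.0
    for _ in range(steps):
        resid = s * d_src - d_ref            # per-correspondence depth residual
        grad = 2.0 * (resid * d_src).mean()  # d/ds of the mean squared residual
        s -= lr * grad
    return s
```

In practice the scale would be optimized jointly with the scene flow MLP, but this already conveys why the fine-tuned quantities are far cheaper than retraining a full CNN depth network per scene.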
## 3.3 Limitations
**3.3.1 Limitation concerning global information**
Thank you for bringing this up. Initially, we were concerned about the extent to which global information might impact our method. Unlike approaches such as NSFF or Nerfie, which capture comprehensive scene details for predictions, our method showed promising performance on the iPhone dataset, which contains multiple frames for each scenario. Additionally, the newly added experimental results on the HyperNeRF dataset during the rebuttal period consistently demonstrate the improvement achieved by our approach. The reason for this success might be that even though we only use information from nearby frames to make predictions, we train our scene flow (MLP) and rendering (MLP) modules on the whole video, which encodes some global information across frames into our model and empowers it to make reasonable inferences. More demos on the Nerfies and iPhone datasets can be found in the PDF document of the rebuttal to the Associate Chair (AC) at the top of the review page. The video link for HyperNeRF can be found in our official comment.
We also experimented with different numbers of nearby frames and explain these results in our ablation studies. With fewer frames, our model does not work as well; beyond a certain number of frames, the improvement becomes less noticeable. One reason could be that long-range correspondence tracking is imperfect. Another could be that the camera and objects move in ways that are hard to predict, so a longer window does not necessarily provide much more useful information.
---
Rebuttal Comment 1.1:
Comment: I appreciate the detailed response and clarifications on my questions. I do not have any additional questions.
---
Reply to Comment 1.1.1:
Title: Response to Comments
Comment: Dear reviewer Rm9i,
We would like to express our gratitude for the valuable and constructive suggestions you provided. Your input has been instrumental in enhancing the quality of our paper's final version. Your thoughtful feedback has significantly contributed to the refinement of our research. We genuinely appreciate the time and effort you dedicated to offering such insightful comments, which have undoubtedly enriched our work.
Warm regards, | Summary: This paper proposes an algorithm for novel-view synthesis in dynamic scenes. According to the abstract and introduction section, this paper takes a monocular video as an input (L.98) and cope with uncontrolled or length scenarios (L.3). Using geometric priors, such as monocular depth and optical flow from off-the-shelf methods, this paper exploits the pixel-wise correspondence to encode sceneflow within the network.
Strengths: Overall, I am a bit not that positive for this paper. Let me put more comments on the weakness section.
Weaknesses: _1. Wrong experimental setup._
This paper claims that they focus on rendering scenes from a monocular video to handle lengthy videos. However, this is not true. The NVIDIA dataset [63] captures the scenes using a camera rig (a 2x8 array camera? not sure), which is not a monocular video. I think that the correct meaning of a monocular video is addressed by the Dycheck dataset [17], which is called the iPhone dataset [17] in this manuscript. Moreover, the NVIDIA dataset [63] only has a few training frames, which is not well aligned with the points addressed by the authors in the abstract and introduction. At the least, the HyperNeRF dataset [42] provides more than 200 frames per scene, which I could call a somewhat lengthy video. Such inconsistency between the addressed problems and the proposed solution is quite not good. I am not satisfied with the quality of the writing.
Moreover, where can I find the qualitative results on Dycheck dataset [17]? While table 3 provides the quantitative results, I could not find any qualitative results.
_2. No video demo_
More seriously, though this paper tackles the neural rendering in dynamic scenes, I could not find any video demo even in the supplementary material. __Lack of video demo is a big problem__.
_3. What is the difference in technical contribution in comparison with NSFF [30]?_
NSFF [30] also exploits scene-flow understanding within the network, similar to this paper. What is the main difference? At the least, the authors should discuss this issue and provide some quantitative/qualitative differences in their experimental section. But I could not find any difference or novelty in this part.
_4. Lack of related papers to address the speed in dynamic NeRFs._
This paper is not the pioneering work that provides fast rendering/training speed in dynamic NeRF. I hope that the authors will look at the paper TiNeuVox, Siggraph Asia 2022 (https://github.com/hustvl/TiNeuVox). If this paper wants to claim novelty in speed, the authors should have fairly compared with TiNeuVox.
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: I hope that the authors properly address my concerns which are listed in the weakness section.
Overall, this paper does not provide the whole information that is necessary for the reviewers to properly judge its validity.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 1 poor
Limitations: It's okay to me.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: # Section 2 - Reviewer nc3b
We thank the reviewer for the constructive assessment of our work. In the following, we address the concerns point by point. Please feel free to use the discussion period if you have any additional questions.
## 2.1 Weakness
**2.1.1 Wrong experimental setup**
Weakness Regarding Nvidia Dataset: Your observations are greatly appreciated. We agree with your assessment regarding the limitations of the Nvidia dataset, particularly within the context of monocular video-based view synthesis, a facet also addressed by Dycheck [17]. Despite this, our utilization of the Nvidia dataset is driven by its widespread adoption across related literature in this domain. This deliberate choice enhances the comparability of our algorithm with other approaches.
Inquiries Regarding HyperNeRF Dataset: We extend our gratitude for bringing this matter to our attention. The performance of our algorithm on the HyperNeRF dataset has been incorporated into our comprehensive response, available in the appended PDF document of the global response. These newly added experimental results on the HyperNeRF dataset consistently demonstrate the improvement achieved by our approach. For clarity, we've included a subset of results in our response to Reviewer J3AP (Sec. 1.1.3). The video link for this dataset can be found in the "Official Comment" to the Associate Chair (AC).
Inquiries Regarding Dycheck (iPhone) Dataset: We appreciate your keen observations. Our response includes the results produced by our algorithm when tested on the iPhone dataset. It's important to note that the iPhone dataset presents challenges, and our algorithm struggles to generate clear views in most situations. The demo for the Dycheck (iPhone) and Nerfies datasets can be found in the PDF document of the rebuttal to the Associate Chair (AC) at the top of the review page.
**2.1.2 Video demo**
We're grateful for your insightful observation, and we recognize how a video demo could really make our paper clearer. Following the instructions for the rebuttal process, we've added an anonymous link to the video demo we've made for the HyperNeRF dataset. You can find this link in our response to the Associate Chair (AC) under the "Official Comment" box at the top of the review page.
**2.1.3 Difference with NSFF**
We want to express our gratitude for your perceptive observation. The NSFF paper has undeniably made a significant contribution to the realm of monocular view synthesis by integrating scene flow into the framework of monocular video-based algorithms. However, there are distinct differences that set NSFF and DynPoint apart.
Firstly, DynPoint introduces a corrective mechanism to adjust the depth scale obtained from monocular videos. This correction greatly improves the consistency of geometry across different frames.
Furthermore, NSFF predicts flow for all points (128) along a ray, while DynPoint focuses exclusively on predicting scene flow for surface points. This approach brings two important advantages: firstly, it adds specific constraints to the scene flow of surface points by using predicted optic flow and depth information, which significantly enhances the learning process; secondly, by concentrating on surface points, the scene flow prediction process is made faster.
Lastly, unlike NSFF, which encodes information extensively within the MLP, DynPoint gathers information from neighboring frames using 3D correspondence. This unique approach empowers DynPoint to more effectively learn from sequences of long videos.
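The constraint on surface-point scene flow mentioned in the second point can be sketched as follows. This is a simplified, hypothetical NumPy version assuming identical pinhole cameras and camera-frame coordinates; it is not the actual DynPoint loss, only an illustration of how predicted optical flow and depth supervise the 3D motion of surface points:

```python
import numpy as np

def flow_consistency_loss(uv, depth, scene_flow, K, flow_2d):
    """Supervise 3D scene flow of surface points with 2D optical flow.

    uv         : (M, 2) pixel coordinates in frame t
    depth      : (M,)   depths of those pixels in frame t
    scene_flow : (M, 3) predicted 3D motion from frame t to t+1 (camera frame)
    K          : (3, 3) camera intrinsics (identical cameras assumed for brevity)
    flow_2d    : (M, 2) predicted optical flow (e.g. from RAFT) for the pixels
    Returns the mean L1 error between the flow induced by the scene flow and
    the predicted optical flow.
    """
    # Unproject pixels to 3D surface points in the camera frame.
    ones = np.ones((uv.shape[0], 1))
    rays = np.concatenate([uv, ones], axis=1) @ np.linalg.inv(K).T  # (M, 3)
    xyz = rays * depth[:, None]
    # Move the surface points with the predicted scene flow, then reproject.
    xyz_next = xyz + scene_flow
    proj = xyz_next @ K.T
    uv_next = proj[:, :2] / proj[:, 2:3]
    induced_flow = uv_next - uv
    return np.abs(induced_flow - flow_2d).mean()
```

Because only one 3D point per pixel (the surface point) is constrained, this supervision is both denser per point and cheaper than constraining all 128 samples along each ray.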
**2.1.4 Comparison with TiNeuVox**
We want to acknowledge your input, which is truly valuable. In our submitted work, we've referenced TiNeuVox and carried out a comparison of its performance with our method using the Nvidia dataset. TiNeuVox is a notable effort that aims to create a more effective way to represent geometry, appearance, and motion information in dynamic situations. While it has its strengths, there are noticeable differences between DynPoint and TiNeuVox.
Firstly, TiNeuVox doesn't explicitly learn about scene flow. Instead, it adopts a method that learns about motion in an indirect manner through a specially designed voxel-style representation. This implicit way of representing motion can be challenging to control effectively during training, which can contribute to its decreased performance in the Nvidia dataset.
Secondly, TiNeuVox tries to capture the entire dynamic scenario within an improved representation. However, this comprehensive approach can face difficulties when dealing with long videos, as it might require more computational resources and memory due to its complexity.
Lastly, we've included a speed comparison in Table 1 of the main text. From this table, we could notice that our model performs faster and achieves higher accuracy when compared to TiNeuVox on the Nvidia dataset.
---
Rebuttal 2:
Comment: Dear Reviewer,
Thanks for your valuable comments. Would you please have a look at the authors' rebuttal and other reviewers' comments and see whether your concerns have been addressed or not?
---
Rebuttal 3:
Title: Comments to Authors
Comment: Thank you for the detailed response. While the rebuttal was clear and to the point, I remain __skeptical__ about the presented contributions and experimental results of this submission.
### 1. Single Video Demo?
I checked the quality of the video demo, but the authors __only provide the easiest case__ with only a single scene (Chicken in the HyperNeRF dataset). Why haven't the authors shared comprehensive qualitative results? Considering this paper emphasizes the quality of the rendered video, it's essential to closely examine its temporal consistency. Based on the content shared by the authors, it's challenging for me to assess if the paper genuinely substantiates its statements on dynamic neural rendering quality and enhanced temporal coherence.
- __In my own trials with TiNeuVox and rendering a scene from a static viewpoint, the outcome appears more consistent than what's demonstrated in this paper.__ Note that the authors do not compare the results by TiNeuVox in the video demo.
- Additionally, it's baffling why the comparisons in the video demo are __restricted to just one paper, NSFF__. The current approach makes me __suspicious__ of the quality of this submission. This paper is not ready for publication.
### 2. Lack of experiments to support authors' claims
The experiments, in their current form, don't offer significant insights. Upon revisiting the abstract, the authors mention:
- we propose DynPoint, an algorithm designed to facilitate the __rapid__ synthesis of novel views for unconstrained monocular videos
- our method exhibits strong robustness in effectively handling __long-duration videos__ without learning a canonical representation.
If I were in the authors' position, I would prioritize contrasting the findings with TiNeuVox, which boasts quicker training and rendering in dynamic environments. Even though the authors contend that TiNeuVox doesn't utilize scene flow (potentially explaining the limited comparative data throughout the paper, supplementary material, and rebuttal), I firmly believe there's a need to set TiNeuVox against all datasets, showcasing full qualitative video results. Without this, __it feels like the authors may have overclaimed the strength of the submission about the rapid rendering.__
Regarding longer video scenarios, reviewer J3AP has similarly noted: _"The paper claim that the method could handle long-duration video, which I agree. However, no experiment is conducted to prove this ability."_ I also checked the additional results that the authors uploaded, but __in my opinion, none of them proves the method's strength in the lengthy-video scenario__.
Here's the key.
- If the authors argue that quantitative metrics are higher than those of previous methods, I would say that it looks trivial to me.
- However, if the authors could provide a video demo similar to DynIBaR presented on their project page, I will absolutely vote for acceptance.
As a reviewer, showing the proper qualitative results is also one important factor to judge the acceptance of the submission.
### 3. Lack of analysis on the usage of consistent depth
It makes sense to me that the authors extend the monocular geometry cues for learning the scene flow. Typically, as the other reviewers also commented, using consistent depth estimation is a meaningful approach in the dynamic NeRF setup. However, it is unclear whether using consistent depth is actually effective. In terms of quantitative results, I found one. However, in terms of qualitative results, I have no idea. This paper targets _not the static scene but the dynamic scene_, yet many ablation studies simply provide the numbers. I am not satisfied with such a submission.
_Moreover, if the authors want to claim a clear difference compared to NSFF or the novel contribution, the authors should have provided a specific ablation study to alleviate such concerns._
For example, MonoSDF (Yu et al., NeurIPS 2022) also exploits monocular geometric cues, such as depth maps and surface normal maps. As you can see in that manuscript, it provides tons of qualitative results with detailed ablation studies that strongly support the authors' claim about the benefit of using the monocular cues.
However, in this paper, even though the quantitative results achieve state-of-the-art performance, I believe that showing qualitative results is much more persuasive, particularly for the dynamic neural rendering task. I hope that the authors can provide a large set of qualitative results with clear comparisons against various baselines. Otherwise, I would say that this paper is not ready for publication.
---
Rebuttal 4:
Title: To all reviewers and ACs
Comment: I found that the other three reviewers voted for acceptance at the initial rating and I am the only one who is against this paper. If my argument looks too aggressive, let me stop complaining about the contributions of this paper. However, as a reviewer, I cannot agree with this paper at all. I note that most concurrent dynamic NeRF papers present various and numerous qualitative __video demos__ that strongly support the strength of their contributions.
Let me provide the URLs for recently published papers about dynamic NeRF. If the authors provide proper qualitative results, I will re-evaluate my scores. If not, I am really not that positive at this moment.
- [NSFF video demo](https://www.cs.cornell.edu/~zl548/NSFF/)
- [Space-time Neural Irradiance Fields for Free-Viewpoint Video](https://video-nerf.github.io/)
- [D-NeRF](https://www.albertpumarola.com/research/D-NeRF/index.html)
- [Dynamic View Synthesis from Dynamic Monocular Video](https://free-view-video.github.io/)
---
Rebuttal Comment 4.1:
Title: Demo video2
Comment: Dear Reviewer,
We appreciate your feedback. Kindly find our second demonstration video provided above. Should you have any additional inquiries, please don't hesitate to discuss here.
Best regards,
Authors of Paper 724
---
Rebuttal 5:
Title: More Demo Videos
Comment: Hello ACs & Reviewers,
We really appreciate your feedback. We know that including more videos can make it easier for you to understand our paper. So, we're working on adding more videos that compare our method with other advanced techniques. As soon as we have it ready, we'll share the link to our new demo video in the official comment box for the ACs.
Thanks again,
Authors of Paper 724
---
Rebuttal Comment 5.1:
Title: Response to Comments
Comment: Dear Reviewer,
Your feedback on our paper is greatly appreciated. We have taken careful note of your comments, and in upcoming iterations, we are committed to enhancing the quality of our demo. Additionally, we are planning to unveil our project webpage concurrent with the conference publication of our paper. This platform will feature a compilation of all the demos we showcased during the rebuttal process, including both **demo1 and demo2 as outlined in our comments to the Associate Chairs**.
Warm regards, | Summary: The paper proposes a method for dynamic scene synthesis from monocular video. The authors aim to speed up training and deal with long-duration videos. To do this, they propose to estimate scene flows of surface points supervised by signals generated by the proposed consistent depth algorithm. The information is then aggregated to form a neural point cloud, and novel view images are rendered by sampling among this point cloud.
Strengths: 1. The paper proposes a novel solution to dynamic scene synthesis. Instead of estimating the density and color of the scene at each time step, the paper explicitly warps the pixel features of reference images to the target time according to the explicitly reconstructed 3D geometry. This should reduce training time, as there are fewer learnable components.
2. The experiment shows SOTA results on Nvidia dataset.
Weaknesses: 1. Please consider rewriting Equation 3.
2. The results on the Nerfies and iPhone datasets are less consistent compared to the Nvidia dataset. Why not test TiNeuVox on the Nerfies and iPhone datasets? Or test the proposed method on the HyperNeRF dataset?
3. Please highlight all best results in Table 2 & Table 3.
4. The paper claims that the method can handle long-duration video, which I agree with. However, no experiment is conducted to prove this ability.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. What is the rendering speed of the method, considering the relatively complex neural point cloud construction process?
2. An open question: is it possible to extend this method to a generalizable one that can generalize to new scenes without training? For example, replacing the scene flow MLP with a non-learnable method and using a generalizable renderer.
3. Equation 7 is confusing. Are the warped 2K point clouds summed or combined?
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The authors addressed the limitation I was concerned about, which is the potential failure caused by flaws in the explicit depths and flows.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: # Section 1 - Reviewer J3AP
We thank the reviewer for the constructive assessment of our work. In the following, we address the concerns point by point. Please feel free to use the discussion period if you have any additional questions.
## 1.1 Weakness
**1.1.1 Equation 3**
Thank you for pointing it out. The formulation will be adjusted to incorporate the predefined mathematical symbols as follows:
$s_{t-t'}(p_{t})= \hat{d}_{t'}\big(p_t+f_{t-t'}(p_t)\big)\, R_{t'} K_{t'}^{-1} \big(p_t+f_{t-t'}(p_t)\big) + t_{t'} - P_t.$
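For concreteness, the computation can be sketched in NumPy as follows. The argument names are illustrative and the geometry follows a standard warp-then-backproject reading of the equation (warp the pixel by flow, back-project it with the target frame's depth and camera, subtract the source 3D point), not necessarily our exact implementation:

```python
import numpy as np

def scene_flow(p_t, flow, depth, R, K_inv, t, P_t):
    """Illustrative sketch of the adjusted Equation 3.

    p_t:   homogeneous pixel in frame t, e.g. np.array([u, v, 1.0])
    flow:  2D flow from t to t', lifted to homogeneous with 0 in z
    depth: estimated (consistent) depth at the warped pixel in t'
    R, K_inv, t: rotation, inverse intrinsics, translation of frame t'
    P_t:   3D point observed at p_t in frame t
    """
    p_warp = p_t + flow                      # corresponding pixel in t'
    P_tp = depth * (R @ K_inv @ p_warp) + t  # back-project into 3D
    return P_tp - P_t                        # 3D scene flow vector
```

With identity rotation and intrinsics, zero flow and translation, unit depth, and `P_t` equal to the homogeneous pixel, the sketch returns a zero scene-flow vector, as one would expect for a static point.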
**1.1.2 Performance on iPhone dataset**
Thank you for your careful assessment. The smaller gaps on the iPhone dataset are due to its limited information from various angles, as indicated by the "effective multi-view factors" proposed by DynCheck [17]. This dataset naturally has fewer diverse camera views, making the reconstruction of appearance more challenging compared to the Nvidia and Nerfies datasets.
**1.1.3 Tested on HyperNeRF**
We greatly appreciate your reminder regarding HyperNeRF's significance as a pivotal benchmark for evaluating monocular video view synthesis algorithms. We have indeed included an exposition of our algorithm's performance on this dataset within the "global" response PDF document. For clarity, we've included a subset of results here.
### Novel View Synthesis Results of HyperNeRF Dataset
| PSNR ↑ / LPIPS ↓ | Broom | 3D Printer | Chicken | Expressions | Peel Banana | Average |
|----------------------------|--------------|--------------|--------------|---------------|---------------|----------------|
| Hyper-NeRF | 20.60 / 0.613 | 21.40 / 0.212 | 27.60 / 0.108 | 22.00 / 0.196 | 24.30 / 0.170 | 23.20 / 0.260 |
| NSFF | 26.10 / 0.284 | 27.70 / 0.125 | 26.90 / 0.106 | 26.70 / 0.157 | 24.60 / 0.198 | 26.40 / 0.174 |
| DynPoint | **27.40 / 0.248** | **27.60 / 0.163** | **28.10 / 0.089** | **27.90 / 0.147** | **26.50 / 0.129** | **27.50 / 0.155** |
**1.1.4 Table 2 \& Table 3**
Certainly, highlighting all the best results will improve reader understanding. We will ensure that all the best results are emphasized in our final version.
**1.1.5 Long video**
We acknowledge your astute observation. Owing to length constraints, the answer can be found in our response to the second weakness posed by reviewer Rm9i (Sec. 3.1.2).
## 1.2 Questions
**1.2.1 Rendering speed**
Indeed, while our emphasis does not lie in rendering speed optimization, our method outperforms standard NeRF-based approaches in this regard. Demonstrating around 2.78-fold acceleration relative to D-NeRF, our technique capitalizes on the point cloud representation to efficiently bypass unoccupied space. Furthermore, the construction of the neural point cloud is remarkably swift, benefiting from the scene flow computation focusing on surface points. This stands in contrast to the computational demands of D-NeRF and NSFF, which necessitate the prediction of motion for all sampled points (192 samples for D-NeRF and 128 samples for NSFF) along the ray originating from each pixel.
**1.2.2 Generalizability**
Thank you sincerely for raising this valuable point. Designing a generalizable model for a monocular-video-based view synthesis task is always challenging. Just as you said, to make DynPoint generalizable, we also need a generalizable scene flow estimation model. We are currently dedicating our efforts to devising a generalizable approach for estimating scene flow without relying on per-scene fine-tuning in our upcoming work.
Nevertheless, we acknowledge that monocular scene flow estimation presents a formidable challenge due to occlusion caused by both the moving camera and dynamic objects, which can hinder accurate estimation. In this context, we have found significant insights from the prevalent usage of diffusion models in 3D scenarios [69,70], which realize generalizable monocular 3D reconstruction. We hope that the development of such a generalizable scene flow estimation model will greatly enhance the applicability of our method, which could serve as a direct replacement for our current first stage.
[69] Zero-1-to-3: Zero-shot One Image to 3D Object.
[70] One-2-3-45: Any Single Image to 3D Mesh in 45 Seconds without Per-Shape Optimization.
**1.2.3 Equation 7**
Thank you for bringing up this crucial observation. The central concept of our paper is aggregating information from adjacent frames to improve the inference of appearance and geometry information for the current frame. While both summation and combination methods could potentially be utilized for information aggregation, our experiments reveal that summation is not feasible in practice. This is primarily due to the challenge of identifying two perfectly overlapping points in the 3D space with predicted scene flow, which makes the summation unreliable.
To address this concern, we have opted for the combination method, which effectively densifies the neural point cloud of the current frame by integrating information from nearby frames. This approach offers a viable solution to achieve information fusion without encountering the issues associated with direct summation.
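For illustration, the combination strategy can be sketched as follows. The names and shapes are illustrative, not our actual implementation: rather than summing features of points that rarely overlap exactly in 3D, the warped neighbouring clouds are concatenated, densifying the current frame's neural point cloud.

```python
import numpy as np

def aggregate_point_clouds(cur_pts, cur_feats, warped):
    """Combine the current frame's cloud with warped neighbours.

    cur_pts:   (N, 3) points of the current frame
    cur_feats: (N, C) per-point features
    warped:    list of (pts, feats) pairs from the 2K nearby frames,
               already warped into the current frame via scene flow
    """
    pts = np.concatenate([cur_pts] + [p for p, _ in warped], axis=0)
    feats = np.concatenate([cur_feats] + [f for _, f in warped], axis=0)
    return pts, feats
```

The output cloud simply grows with each aggregated neighbour, avoiding any need to identify perfectly overlapping points.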
---
Rebuttal Comment 1.1:
Comment: Thanks for addressing my concerns.
---
Reply to Comment 1.1.1:
Comment: Dear reviewers, we appreciate your valuable feedback. We have also uploaded our demonstration for your review. Please feel free to reach out if you have any questions or require further clarification. | Rebuttal 1:
Rebuttal: # Section 0 - Author rebuttal to ACs
Dear Reviewers, we truly appreciate your thoughtful review, which has been immensely valuable in refining our paper. Your insights have contributed significantly to the enhancement of our work.
## 0.1 Experiments on HyperNeRF
In order to comprehensively evaluate the performance of our method, we undertook an additional experiment using the HyperNeRF dataset. This dataset offers several hundred frames for each scenario, enabling a thorough assessment. Our findings from this experiment reinforce that our algorithm consistently exhibits improvements across nearly all scenarios. Some of the experiment's results are in the response to reviewer J3AP (Sec. 1.1.3) and you can find the full results in the PDF attached.
## 0.2 Video demo
Following NeurIPS 2023's guidelines for responding, we're able to provide the video by sharing a link in the official comment to the Associate Chair (AC). As a result, we've included the link to our demonstration video in our "official comments" to ACs. This video showcases our perspective on the HyperNeRF dataset. In the video, we start by training our model and NSFF (The second best-performing model on the HyperNeRF dataset) using the left video. Then, we show how NSFF generates the middle video by keeping the view constant but changing the input time. Similarly, the right video is generated by DynPoint, where the view is fixed, but the input time changes.
## 0.3 Qualitative results on Nerfies and iPhone
To better illustrate our algorithm, we've included additional examples of the results it produces on the Nerfies and iPhone datasets in our attached PDF (Figure 1 for Nerfies and Figure 2 for iPhone). When comparing DynPoint and NSFF on the Nerfies dataset, we can clearly see the noticeable differences, which align with the findings we presented in our main paper. The comparison on the iPhone dataset showcases our algorithm's edge in handling lengthy and intricate dynamic scenarios. We also recognize that the iPhone dataset poses a unique challenge due to its intentional design, which limits the available multi-view information for each frame. As a result, this complexity makes both correspondence learning and 3D reconstruction more challenging.
Once again, we extend our sincere gratitude for your insightful review, and we are fully committed to addressing any further inquiries or suggestions you may have.
Pdf: /pdf/8bfee3524853da0d62e55f88d2edf8c8d557485d.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Scaling Data-Constrained Language Models | Accept (oral) | Summary: This paper studies an important empirical question: what is the optimal tradeoff between LLM size and the amount of data available for training it. The authors follow an existing line of work (e.g., Chinchilla), and additionally consider the role of data repetition (via multiple epochs), e.g., what is the effect of training a model for K epochs on N tokens each, compared to training it for one epoch on K*N unique tokens. The authors adjust the power law equation introduced in Chinchilla to take into account the “effective data”, which considers the lower value of repeated tokens, and similarly the “effective model size”, which considers the lower value of parameters trained on such data. They present an overwhelmingly large set of experiments, some VERY expensive (the total amount of compute spent on this work is 4.5 million GPU hours). This makes this study practically irreproducible for 99.999% of the community on the one hand, but very valuable still to those training such huge models. The conclusions reached by the authors are interesting: e.g., models can be trained on data that is repeated for up to 4 times, and reach a similar loss as models trained on a similar amount of unique tokens. They also study the effect of augmenting text corpora with code, and applying deduplication filtering steps.
Strengths: - An important empirical question.
- A potentially more accurate modeling of the params/data scaling law.
- A very large set of experiments covering a range of models and scenarios.
- Some interesting and non-trivial observations.
- Paper is generally well written
Weaknesses: - There is something unclear about using the validation or the test loss. The authors say (#42) that they are reporting test loss, but many of the tables and graphs say that they are showing validation loss. Which one is it? Reporting test loss is a problem, as it might lead to overfitting the test data (e.g., by others who want to adapt the proposed recipes).
- The loss differences are often very small. E.g., in Fig1 (right), a difference of 0.006, or in Fig5 (left). Are these results significant?
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: - I am not sure I entirely understand the data repetition setup. In particular, the caption of fig2 says “Training runs with different epochs reuse subsets of the same data to ensure different training data is not a confounding factor.” What does subsets of the same data mean? Is the data sampled from the training data in each repetition? Doesn’t that mean that the data is not effectively repeated?
Typos and such:
- #132-133: I think R_D and U_D in these lines should be R_N and U_N?
- #147: [65]partir
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 4 excellent
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your thorough review.
Weaknesses:
> There is something unclear about using the validation or the test loss. The authors say (#42) that they are reporting test loss, but many of the tables and graphs say that they are showing validation loss. Which one is it? Reporting test loss is a problem, as it might lead to overfitting the test data (e.g., by others who want to adapt the proposed recipes).
We are sorry for the lack of clarity. We have rephrased the mention of test loss to “we report loss on a held-out test set unless otherwise specified”. In the main text, only Figure 4 reports validation loss throughout training instead. We have also added a detailed explanation of how loss is computed in a new section Appendix K “Evaluation Details”. We are not sure why reporting the test loss would be a problem. We are using a fully held-out dataset and compute the loss over it for each model at the end of training. Apart from loss, we also show that our findings are mirrored in downstream performance in the Appendix. Given that all our models will be open-sourced, future work can also run them and compute loss on their own datasets if desired.
> The loss differences are often very small. E.g., in Fig1 (right), a difference of 0.006, or in Fig5 (left). Are these results significant?
We have updated Fig1 (right) to larger models with 8.67B and 6.34B parameters. The loss difference for that setup is 0.017. At these scales, loss decreases very slowly, for example for the 8.67B model, its last 5,000 steps ($\approx$10B tokens) correspond to about a 0.017 loss decrease. Thus, we consider it to be significant. We have also added downstream performance of these two models in the Appendix. Across our 19 downstream evaluation tasks, the 6.34B model has an average score of 25.9, while the 8.7B model scores 23.5. This reinforces our findings that data should be scaled faster than parameters in the repeated regime.
Questions:
We are sorry for the confusion. We meant to say that if we have e.g. 10B unique tokens, those same 10B unique tokens are also a subset of every setup where we have >10B unique tokens. This ensures that as much as possible of the data is shared across runs and our results are not impacted by simply having trained on a better data subset. We have rephrased the caption of that Figure as “We ensure that runs using less data (more epochs) always use a subset of the data used in runs with more data (fewer epochs).”.
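The nested-subset construction described here can be illustrated with a small sketch; taking prefixes is our illustrative way to guarantee the nesting, not necessarily the exact sampling used:

```python
def nested_subsets(tokens, budgets):
    """Nested-subset protocol: every run with fewer unique tokens
    trains on a subset of the data used by runs with more unique
    tokens, so data content is not a confounding factor.
    `budgets` are unique-token counts, e.g. [10e9, 20e9, 40e9]."""
    return {b: tokens[:b] for b in sorted(budgets)}
```

For example, the 10B-unique-token run's data is then exactly a prefix of the 20B run's data, so a comparison between the two isolates the effect of repetition rather than data content.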
Typos:
- #132-133: The behavior for R_D = R_D* is the same as the behavior for R_N = R_N* due to the symmetry hence either one is fine in these lines.
- #147: Thank you for noting, we have fixed it.
---
Rebuttal Comment 1.1:
Comment: Thank you for the clarifications, I am happy with the responses. | Summary: This paper demonstrates the scaling law both mathematically and experimentally in data-constrained situations. They analyze three experimental protocols: allocation (fixed unique data), return (fixed FLOPs), and parametric fit, and obtain new conclusions that were not found in previous research. For example, they demonstrate that increasing the data repetition (epochs) is more effective than increasing the parameters when the number of unique tokens is fixed, and the performance drop is shown to be negligible when there is a slight decrease in unique data within a predetermined number of FLOPs. They also analyze the effects of code data augmentation and data filtering strategies that can be applied in data-constrained situations.
Strengths: - The research is well-motivated and necessary.
- They present scaling laws and propose optimal training strategies in a situation where data is limited.
- They derive reliable and generalizable conclusions through extensive experiments.
Weaknesses: - Increasing the maximum parameter size (e.g. >= 10B in Chinchilla [1]) would further enhance comprehension of the scaling law in large language models.
[1] Hoffmann, Jordan, et al. "An empirical analysis of compute-optimal large language model training." Advances in Neural Information Processing Systems 35 (2022): 30016-30030.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: Since many large language models utilize IF (instruction fine-tuning) and RLHF (reinforcement learning from human feedback), do authors have plans to expand research on scaling laws of data-constrained language models in the IF or RLHF scenario, where the data includes instruction or human feedback data?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 4 excellent
Limitations: The authors adequately suggest limitations in various aspects.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your review. We agree that future work confirming these findings at >10B would be very interesting. The Galactica 120B model from prior work does provide preliminary evidence that you can repeat at these larger scales, but more experiments are needed.
Questions:
> Data-Constrained IF (instruction fine-tuning) and RLHF
* While we are very interested in IF and RLHF, we think the scaling environment for these problems is different enough to warrant an analysis outside of the Chinchilla regime. For IF, often the datasets are quite small and training looks more like traditional ML training (for example the LIMA dataset from Meta only contains 1000 examples). For RLHF, the reward model is "data-constrained" in the sense that annotations are costly. However this model looks more like a classification/regression model, and would likely have different scaling properties than next-token prediction models. Furthermore, published models only scale up to the order of 1M examples which is a different order of magnitude than these examples. Scaling the RL phase of the system is quite interesting, however, this phase also has a different objective and training procedure (PPO).
---
Rebuttal Comment 1.1:
Comment: Thank you for updates and answer about the question. I raised my score after reading the rebuttal. | Summary: This paper investigates scaling properties of languages, in the presence of *repeated data*: plenty of work has studied the scaling properties of LLMs (and ML architectures in general), both with respect to the number of parameters and amount of data, but they generally assume each new datapoint is unique. However, as newer LLMs start to reach the limit of available (internet) data, repeated use of the same datapoints sets will become more common and understanding the "worth" of a repeated datapoint is a very important research question.
The authors conduct extensive experiments, training hundreds of models ranging from 10 million to 9 billion parameters, trained for up to 1500 (repeated) epochs. They then propose a novel *data-constrained* scaling law by introducing a multiplicative factor on the data term that decays with the number of repetitions, and (briefly) validate it, showing a better fit than the typical Chinchilla scaling law in the presence of repeated data.
They then use their proposed scaling to study two questions:
1. *Allocation: What is the optimal balance of (compute/data) resources?*
Their results suggest that it's better to first scale epochs/number of repetitions, and only then parameter count: by repeating data, they are able to train a comparable model (in terms of performance and compute budget) with 27% fewer parameters.
2. *Return: What is the expected value of additional resources?*
They discover that while there are diminishing returns to adding more compute resources in the presence of the same training dataset, there's a notable value in repeating data: training up to 4 epochs yields almost the same loss as training on unique data, but beyond that the value of adding compute resources to train further effectively drops to zero.
Lastly, they explore several methods to address data limitations without the need to produce new natural language data. They found that incorporating code tokens into training data yielded a two-fold increase in effective tokens, even for tasks solely evaluating natural language. They also explored varying data filtering techniques, concluding that data filtering primarily yields substantial benefits on noisy datasets, and not so much on cleaner ones.
Strengths: This was a very interesting read, and the paper is quite well written. I think this work has the potential to have a big impact on the LLM pretraining community: the limits of unique data we can get from the internet are already being reached, but the findings in this paper suggest that won't necessarily be a problem for a while, and that repeating tokens a few times to obtain an optimal capacity-data compute allocation is probably fine. The experiments on filtering even hint that it might be better to be more aggressive in current filtering and repeat the cleaner tokens. The evaluation in Section 7 is also quite strong, using downstream performance, and the findings with code are insightful and highly relevant to the pretraining community, further confirming "implicit" knowledge in the community.
Weaknesses: The only slightly more serious flaw I see is that the proposed scaling law is not well validated: from what I see, only a single plot of the fit is shown in the main paper, and no fitting metrics (like R²), commonly reported in scaling-law papers, are given to quantitatively compare against the unique-data scaling laws.
I think the paper could also be improved by making the theoretical discussion more concise and moving some of the (very interesting) findings from the appendix to the main text.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: N/A
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 4 excellent
Limitations: See above
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your thorough summary of the work and careful review.
> by repeating data, they are able to train a comparable model (in terms of performance and compute budget) with 27% less parameters.
We have updated the main plot (right) to larger scales comparing an 8.7 billion parameter model and a 6.3 billion parameter one. The difference is even more significant now, with the smaller model performing better. We further show that the difference between these two models is also significant in terms of downstream performance. Thus, we would go beyond calling them comparable and say that the smaller model is indeed better.
> The experiments on filtering even hint that it might be better to even be more aggressive in current filtering and repeating the cleaner tokens.
This is a great observation. Indeed the models that repeat C4 twice are better than the models trained on OSCAR (Appendix) with just one repeat.
**Weaknesses:**
> ..fitting metrics..
We have added R^2 and loss values for the different versions of our exponential decay formulation, as well as for applying no decay, at the end of Appendix A. Thank you for this suggestion.
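For readers wanting to see how such a goodness-of-fit check can be computed, here is an illustrative sketch on synthetic data. It assumes a simplified one-parameter version of the exponential-decay form; the decay constants are made up for illustration, not our fitted values:

```python
import numpy as np

def effective_data(R, Rstar, U=1.0):
    # Simplified effective-data model: repeated tokens contribute less
    # than fresh ones, with marginal value decaying with constant Rstar.
    return U + U * Rstar * (1.0 - np.exp(-R / Rstar))

# Synthetic "observations": repetitions and their effective data,
# generated from a made-up ground-truth decay constant of 5.3.
R_obs = np.array([0.0, 1.0, 2.0, 4.0, 8.0, 16.0])
D_obs = effective_data(R_obs, Rstar=5.3)

# Least-squares fit of Rstar over a grid (no SciPy dependency).
grid = np.linspace(0.1, 20.0, 2000)
sse = np.array([np.sum((D_obs - effective_data(R_obs, g)) ** 2) for g in grid])
Rstar_fit = grid[int(np.argmin(sse))]

# Coefficient of determination (R^2) of the fit.
resid = D_obs - effective_data(R_obs, Rstar_fit)
r2 = 1.0 - np.sum(resid ** 2) / np.sum((D_obs - D_obs.mean()) ** 2)
```

The same R² computation applies unchanged when comparing the decayed formulation against a no-decay baseline on real loss observations.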
> I think the paper could also be improved by making the theoretical discussion more concise and moving some of the (very interesting) findings from the appendix to the main text.
We agree with the sentiment that much of the Appendix could be part of the main text, however, we are unsure how to reduce the theoretical discussion to make space for that without sacrificing content necessary to understand the main text. If you have specific suggestions, we are open to revising the text based on them.
---
Rebuttal Comment 1.1:
Title: Response
Comment: > If you have specific suggestions, we are open to revising the text based on them.
I guess my (lazy) suggestion is just arXiv'ing a longer (8+ page) version :)
Strengths: This paper studies a very important problem (what should one do to further scale up data when data is used up) and offers very practical advice (repeating the data 4 times and using code 50% to the training data). I believe this paper will make a clear impact in the area of language models and general AI.
Weaknesses: Although this paper has clear advantages in terms of practical guidance, I am concerned about the following weakness:
The choice of the specific parametric form in equation 5 and 6 are not clearly explained and may be subject to debate, specifically:
- Why should the decay in Equation 5 be exponential (rather than, say, polynomial)? When the data points are few, an exponential may look similar to a polynomial (or other functional forms).
- Why the decay in model scale (Eq. 6) should follow the same form as data (Eq. 5). For the original Chinchilla scaling law, the two terms taking symmetric forms are understandable due to their nature. But for repeating data, model parameter and data scale are not as symmetric as unique data setting, thus considering the same form of decay as data (though I do believe there should be a certain form of decay in model scale) might be less justified
Different types of data may need different levels of repetition.
- The Chinchilla model mostly considers webpages v.s. the Galactica model mostly consider academic papers. Intuitively, the level of complexity of webpages might be lower than papers, and one may want to repeat the complex data over simple ones.
The conclusions that hold for smaller models (less than 10B parameters) may not hold for models larger than 65B.
- This is actually my largest concern regarding this paper. Empirically, at least for fine-tuning, people have observed that larger models do not require as many fine-tuning epochs as smaller models. In general, models smaller than 10B behave differently from models larger than 65B.
- An alternative explanation for the repeating 4 times conclusion drawn by this paper could be: smaller models do not have enough capability to fit the data with only one pass, but larger models may be able to do so.
The conclusions that hold for data scales of less than 100B tokens may not hold for data scales larger than 1T.
- Most of the experiments in this paper consider training models on less than 100B tokens. In current practice, models (even if they are just 7B parameters) are mostly trained on 1T tokens (either code or text). This means that in practice, the question is whether one can repeat 1T tokens 4 times, rather than 10B tokens 4 times.
- As the data becomes large, a possible model behavior could be: with 10B tokens, because the data is relatively small, repeating is beneficial; yet when the data becomes large, repeating may not be as beneficial as in the small-data regime.
Technical Quality: 4 excellent
Clarity: 3 good
Questions for Authors: The authors reported that WebNLG and bAbI immediately improve as one mixes in code data. I am curious what kind of synergy exists between text and code data, and which families of text tasks would benefit the most from mixing in code?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 3 good
Contribution: 4 excellent
Limitations: See above discussion about weakness
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your detailed review and thought-provoking points.
Weaknesses:
Choice of the parametric form:
- Decay: We agree that these experiments do not rule out other parametric forms, such as polynomial. We chose exponential as an intuitive starting point among possible decay formulations, given the sharp dropoff in data value (which rules out forms like linear decay). We stuck with it given that it fits our data well with few parameters.
- Parameters: While we do not have a mechanistic explanation for this term, our hypothesis for why this decay is necessary is that when fitting an over-parameterized model to little data, parameters learn redundant features, and so they end up "repeating" in a similar way to the data.
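The exponential-decay fit discussed above can be sketched numerically. The numbers below are synthetic and purely illustrative (not the paper's actual data or fitting procedure); the sketch just shows how a single-exponential form with few parameters can be fit to a decaying value curve.

```python
import numpy as np

# Hypothetical per-epoch "marginal value" of repeated data (synthetic
# numbers for illustration; the paper fits actual loss curves).
epochs = np.array([1, 2, 3, 4, 5, 6])
value = np.array([1.00, 0.61, 0.37, 0.22, 0.14, 0.08])

# Fit value(n) = A * exp(-n / k) via linear regression in log space.
slope, intercept = np.polyfit(epochs, np.log(value), 1)
A, k = np.exp(intercept), -1.0 / slope

print(f"A ~ {A:.2f}, decay constant k ~ {k:.2f} epochs")
# The fitted k (~2 epochs here) controls how quickly the value of
# repeated data decays; two parameters suffice for a good fit.
```

The same log-space trick would work for other parametric forms (e.g., a power law via a log-log fit), which is what a comparison against polynomial decay would look like in practice.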
Different types of data:
- We agree that the type of data is an important factor, although it is challenging to change the dataset genre while controlling for scale. We did run experiments with another large-scale web dataset, OSCAR, which has a different data distribution with less filtering. Despite this difference, we found the trends to be about the same. While it is difficult to define a general-purpose scalar measure of complexity, controlled experiments on this factor are certainly an interesting direction for future work. (And personally, it would be refreshing if quality had a non-linear scaling factor compared to quantity.)
Larger models:
- We agree that it is important to further validate these findings at even larger scales than our 8.7B models. We already spent a significant amount of compute for the experiments at hand, so we leave this to future work.
- We investigated 3 model scales in detail (2.7B, 4.2B, 8.7B) and found very similar behavior. However, it could indeed be that there are fundamental changes when going significantly beyond 10B.
More data:
- We have swapped the models in the first plot (right) for larger models trained on more data: 8.67 billion parameters trained for 178 billion tokens and 6.34 billion parameters trained for 242 billion tokens. This has reinforced our findings, as we see the same behavior of many epochs still being beneficial (9.7 epochs). We are indeed not hitting the trillion-token regime, and it is important for future work (with access to even more compute) to investigate this.
- This depends on the model size. E.g. with 10B tokens and a 1B model, you will get lots of value from repeating. However, with 1T tokens and a 1B model, then the model will already be close to its maximum capacity at 1T tokens, so repeating is expected to have less value. If we adapt the model size to be bigger in the second case and have more capacity, then our current findings suggest that we can get the same value out of repeating as in the smaller case.
Questions:
Our intuition is that code data benefits tasks that require long-term state tracking. Here is an example input from bAbI: “Sandra travelled to the office. Sandra went to the bathroom. Mary went to the bedroom. Daniel moved to the hallway. John went to the garden. John travelled to the office. Daniel journeyed to the bedroom. Daniel travelled to the hallway. John went to the bedroom. John travelled to the office. Where is Daniel?” The correct answer is “hallway". Without code data, the models benchmarked are not able to solve this task. However, as soon as code data is added the ability of state tracking appears to emerge and they can solve such tasks. We believe that this could be due to the necessity of keeping track of variable states in order to accurately predict code. | Rebuttal 1:
Rebuttal: We thank all reviewers for their thorough reviews. We have made the following updates to the paper:
Increased the scale of the models in the first plot (right) showing even more significant results in support of scaling data faster than parameters when repeating (also in terms of downstream performance shown in a table in the Appendix). As we cannot upload a new version of the paper, we’re attaching the plot and table to this message.
Added several new Appendix sections:
- Appendix G: Case study on how Galactica should have allocated compute according to data-constrained scaling laws
- Appendix K: Details on how loss is computed, how scores are normalized and smoothing for the graphs
- Appendix P: Loss curves of complementary strategies
Improved several Appendix sections:
- Appendix A: More details on the calculation of $U_N$ and comparison of different fits and their loss and $R^2$
- Appendix F: Added experiments on an alternative formulation to let excess parameters hurt performance by decaying alpha and beta
- Appendix I: Fixed one loss plot for OSCAR
- Appendix Q: Added hyperparameter sensitivity as a new limitation
- Appendix R: Specified some more hyperparameters
Rephrased several sentences throughout and fixed typos.
We agree with the overall sentiment expressed by the reviewers that there are many exciting related research questions, such as behavior at hundreds of billions of parameters and trillions of tokens, different datasets beyond C4 and OSCAR and data-constrained scaling laws during finetuning. We look forward to answering these together with the broader research community in future work.
Pdf: /pdf/28bb725641494e45c5caaf8ab528ebc804b5690a.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
What You See is What You Read? Improving Text-Image Alignment Evaluation | Accept (poster) | Summary: This paper aims to develop a method for evaluating the level of semantic alignment between text and image. In order to achieve this, the authors construct datasets of image and text pairs, and collect human judgments to determine if the text and image are semantically aligned. The paper proposes two methods for estimating alignment between text and image. The first method generates questions and answers from text and checks whether a Visual Question Answering (VQA) model provides consistent answers to the questions. The second method is an end-to-end classifier that predicts if the image entails the text. Experimental results demonstrate that the proposed methods outperform existing multimodal models.
Strengths: - A significant contribution of this paper is the collection of human judgments on text and image alignment. This will aid in the development and validation of alignment evaluation methods in future research.
- The annotation pipeline is carefully designed, and the details are thoroughly described.
Weaknesses: - The VQ^2 method depends on the sampling of questions and answers, which may result in output variation due to chance. The effects of sampling are not discussed in the paper.
- Language models used to generate contradicting captions may introduce biases in the datasets. For example, generated captions might have typical wording patterns that are difficult for humans to detect. Validating synthesized datasets is challenging.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: - Can the authors provide more details about the PaLM instruction used to generate contradicting captions? Was the prompt in Figure 2 utilized to rewrite captions?
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The paper discusses two limitations:
1. The difficulty in judging alignment between a text and an image even for humans.
2. The challenge of filtering offensive content from the dataset.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for acknowledging the value of our human image-text judgments, that will aid in the development and validation of alignment evaluation methods.
### Variability in $VQ^2$ Sampling [weakness 1]
Our strategy for question-answer pair generation is based on established works like $Q^2$ and $VQ^2A$. We choose answer candidates by extracting noun phrases using the spaCy toolkit, so they are not random. For the QG we use beam search, not sampling, generating 5 questions per answer and filtering out low-quality question-answer pairs.
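A schematic of this round-trip filtering is sketched below. The function names and canned outputs are illustrative stubs, not the actual T5-XXL QG/QA models; the point is only the control flow: a pair survives when the text-only QA model reproduces the intended answer.

```python
# Schematic of the round-trip question-answer filtering described above.
# All three functions are hypothetical stubs standing in for real models.

def extract_answer_candidates(caption):
    # The real pipeline extracts noun phrases with spaCy; here we
    # hard-code the candidates for one example caption.
    return ["a dog", "a red ball"]

def generate_questions(caption, answer):
    # Stub QG model; a real beam search would yield up to 5 questions.
    if answer == "a red ball":
        return ["What is playing with a red ball?"]
    return ["What animal is in the picture?"]

def answer_question(caption, question):
    # Stub text-only QA model used to validate each pair against the text;
    # it deliberately fails on the second question to simulate a
    # low-quality pair being filtered out.
    return "a dog" if "animal" in question else "a cat"

caption = "a dog playing with a red ball"
kept = []
for ans in extract_answer_candidates(caption):
    for q in generate_questions(caption, ans):
        # Keep the pair only if the QA model, reading the caption,
        # reproduces the intended answer (round-trip consistency).
        if answer_question(caption, q) == ans:
            kept.append((q, ans))

print(kept)  # only the consistent (question, answer) pair survives
```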
### Validating contradictions generated by the LLM [weakness 2]
For our SeeTRUE test set, the labels are a product of a majority consensus among three annotators. This yielded a high inter-annotator agreement, as documented in Table 6. Such consistency among annotators signifies a shared understanding of the semantics inherent in the generated captions. We agree that generating the contradictions with different LLMs would reduce biases of a single LLM. In light of your and reviewer j32y feedback, we tested GPT4 for Generating Contradicting Captions, which yielded comparable results to our initial model. We believe that the combination of several LLMs and VLMs in the ConGen methodology reduces biases introduced by a single LLM.
When considering the training set, we acknowledge that it contains a proportion of auto-labels. However, as depicted in Table 6, any noise introduced is minimal. More crucially, even when such noise is present, Table 2 shows that fine-tuning on this synthetic dataset leads to enhanced performance on SeeTRUE, which employs human-annotated labels.
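The majority-consensus labeling mentioned above amounts to simple vote aggregation across the three annotators; a minimal sketch (the pair names and labels are illustrative, not from SeeTRUE):

```python
from collections import Counter

def majority_label(annotations):
    # Aggregate one example's binary labels by majority vote.
    label, _count = Counter(annotations).most_common(1)[0]
    return label

# Illustrative annotations from three annotators per image-text pair.
examples = {
    "pair_1": ["aligned", "aligned", "not_aligned"],
    "pair_2": ["not_aligned", "not_aligned", "not_aligned"],
}
labels = {pair: majority_label(votes) for pair, votes in examples.items()}
print(labels)  # {'pair_1': 'aligned', 'pair_2': 'not_aligned'}
```

With three annotators a majority always exists for binary labels, so no tie-breaking rule is needed.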
### PaLM instruction [question 1]
You're right. The prompt illustrated in Figure 2 is precisely what we deployed to produce contradictions. This was done by running the prompt with multiple few-shot examples, like the three presented in Figure 2. By altering specific elements (e.g., "knife" to "spoon" or "rainforest" to "desert"), we applied this to the five input captions using the LLM. This procedure is adaptable and can integrate other LLMs like GPT4 or Stable Beluga, among others. Once the contradictions were generated, we employed an NLI model to rank each one, selecting the “most contradicting” candidate which is identified by the lowest NLI score.
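The final NLI-based selection step can be sketched as follows. `nli_entailment_score` is a hypothetical stand-in for the actual NLI model, with canned scores for illustration; the selection logic (pick the candidate with the lowest score) follows the description above.

```python
def nli_entailment_score(premise, hypothesis):
    # Hypothetical stub: canned entailment scores in [0, 1];
    # the real pipeline calls an NLI model here.
    canned = {
        "A man cuts bread with a spoon": 0.04,        # clear contradiction
        "A man cuts bread with a large knife": 0.71,  # barely changes meaning
    }
    return canned[hypothesis]

original = "A man cuts bread with a knife"
candidates = [
    "A man cuts bread with a spoon",
    "A man cuts bread with a large knife",
]
# Select the "most contradicting" candidate: the lowest entailment score.
most_contradicting = min(candidates, key=lambda c: nli_entailment_score(original, c))
print(most_contradicting)  # the "spoon" rewrite contradicts the original most
```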
---
Rebuttal Comment 1.1:
Title: Thanks for your response
Comment: Thank you for your responses.
The rebuttal provides clarification to my questions.
I am increasing my score. | Summary: This paper addresses the problem of text-image alignment evaluation by proposing a benchmark named SeeTRUE and two alignment metrics named VQ^2 and end-to-end VNLI.
SeeTRUE covers a variety of real and synthetic images and captions. The synthetic captions, which are generated by large language models (LLMs), preserve most words from the original captions but convey a contradictory meaning. The synthetic images are generated by text-to-image generative models. The real image and captions are collected from existing datasets. In addition to the image-text pairs, SeeTRUE also provides binary human annotation of the alignment between these pairs.
VQ^2 first generates question-answer pairs based on the caption and then assesses question-answer pair alignment against the image. End-to-end VNLI directly fine-tunes a visual NLI model to determine whether an image entails a caption.
Experiments show that the proposed image-text alignment metrics outperform state-of-the-art (SOTA) vision-language models on the SeeTRUE benchmark. The VQ^2 metric also exhibits (1) a high correlation with humans in the evaluation of text-to-image generative models and (2) strong potential as a reranking technique to select generated images.
Strengths: * The proposed SeeTRUE overcomes two limitations found in existing benchmarks: a primary emphasis on real images and a lack of challenging negative captions. SeeTRUE provides a valuable testbed for evaluating image-text alignment methods.
* The proposed two image-text alignment metrics are well-designed and exhibit very promising results, which can facilitate the evaluation and development of image captioning and text-to-image generation models.
* The experiments are comprehensive and well-designed.
* The paper is well-written and easy to follow.
Weaknesses: The current manuscript still has three problems (which do not warrant rejection):
* SeeTRUE primarily focuses on the challenging scenario where negative captions differ from positive captions by only a few words. However, it remains unclear whether the proposed VQ^2 and End-to-end VNLI models, specifically tailored for this setting, can still surpass the strong baselines in the standard scenario where negative captions describe completely different images.
* Considering the utilization of three models—namely, an answer generation model (T5-XXL), a QA model (T5-XXL) and a VQA model (PaLI-17B)—VQ^2 might incur higher computational costs compared to the baseline methods. It would be beneficial to provide a detailed account of the computational expenses of these methods in Table 2.
* It seems that the results in Table 7 of Appendix A3 are missing.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: N/A
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Please refer to the “Weakness” section.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for acknowledging that our methods can facilitate the evaluation and development of image captioning and text-to-image generation models.
### Evaluation Scenarios with Different Negative Captions [weakness 1]
SeeTRUE's diverse architecture encompasses seven distinct test sets. Three of them, namely Winoground, COCO-Con, and PickaPic-Con, operate on a "few-words" change dynamic. Conversely, the remaining datasets, such as SNLI-VE, DrawBench, EditBench, and COCO t2i, explore alternative evaluation angles. For instance, SNLI-VE extracts text descriptions from *entirely distinct images*, while COCO t2i offers a *broader semantic spectrum* due to its text-image generation capability. These variations ensure SeeTRUE’s capacity to evaluate alignment over a wide range of text perturbations, and not just simple lexical changes.
### Computational Costs of $VQ^2$ vs. VNLI [weakness 2]
Indeed, $VQ^2$’s sequential pipeline can present more computational overhead compared to simpler models like VNLI. However, this modular approach is advantageous for a few reasons:
1. **Robustness**: As a zero-shot model, $VQ^2$ steers clear of shallow heuristics during training, ensuring a comprehensive evaluation.
2. **Interpretability**: The $VQ^2$ model can effectively pinpoint the questions that majorly contribute to misalignments. This is evident in Figure 4.
3. **Data Generation Capability**: $VQ^2$ doubles as a synthetic data generation tool. Typically employed offline, it can generate pertinent questions and answers for datasets consisting of images and texts. As we work on optimizing its latency, we're confident of further enhancing its efficiency.
We are continuously working to refine the computational efficiency of $VQ^2$, ensuring its alignment with real-world applications.
### Table 7 appendix missing results [weakness 3]
We appreciate you pointing this out. We have since addressed and rectified this oversight.
---
Rebuttal Comment 1.1:
Title: Response to Author Rebuttal
Comment: Thanks for your response. My first concern is well addressed. As for the second concern, I agree that sequential pipeline is advantageous in the three aspects. However, I think it would be better to report the computational cost (e.g., time cost and the amount of GPU memory used) in the main results of Table 2.
---
Reply to Comment 1.1.1:
Title: Computational Costs
Comment: Thank you for your feedback. In response to your suggestion, we have now incorporated a detailed computational cost section within the paper. Below is a brief summary:
| Aspect | PaLI | $VQ^2$ | BLIP2 |
|---|---|---|---|
| Inference Time | 500ms per image-text pair | 40 seconds per image (full pipeline) | 750ms per image-text pair (as measured in [source](https://arxiv.org/abs/2303.11897)) |
| Model Parameters | 17B parameters | T5-XXL - 11B parameters + PaLI 17B | 12B parameters |
| Hardware Requirements| Four v4 chips (Jouppi et al., 2020) | T5-XXL: 16 TPU v3 cores + PaLI: 4 v4 chips | GPU with 24GB as reported in [HuggingFace](https://huggingface.co/spaces/Salesforce/BLIP2/discussions/2#:~:text=The%20hardware%20requirements%20depend%20on,up%20to%2024Gb%20during%20inference.) |
| Framework | T5X (Roberts et al., 2022) on JAX (Bradbury et al., 2018) | T5X (Roberts et al., 2022) on JAX (Bradbury et al., 2018) | PyTorch | | Summary: The authors propose a benchmark and two methods for evaluating fine-grained and complex image-text alignment. Their benchmark includes multiple distractor captions and images, covering both real and synthetic images and captions. They propose two methods - one evaluates image-text alignment by asking multiple entailed questions, and another is a large Vision-Language model fine-tuned on their benchmark image-caption data.
Strengths: Evaluating by visual entailment is neat since it is interpretable.
They also show that they outperform a contemporary benchmark TIFA that does the same.
They release a comprehensive benchmark, a combination of available data and annotations they collect, which will be valuable to the community for evaluating vision-language models.
Weaknesses: Sometimes certain design choices are not well motivated, making the reader wonder why such a choice was adopted. For example, when generating QA pairs, the question is first answered using a T5 language model to filter some QA pairs. Is this to ensure the answer semantically makes sense for such a question in general? If so, it might be good to write out the clear motivation before explaining what is being done.
The benchmark seems to be simply a collection of more human-annotated image-text data. Motivating why synthetic images or text are sometimes required, and what new ways one can use this benchmark to evaluate models (that current benchmarks lack), would be nice. For instance, with the contrastive captions, one can evaluate compositionality. However, we can already do this with Winoground; what extra does this benchmark give us?
Some other relevant image-text benchmarks that evaluate compositionality using similar ideas of distractor captions are CREPE (https://arxiv.org/abs/2212.07796) and COLA (https://arxiv.org/abs/2305.03689). It might be good to add in a discussion of how this benchmark relates.
(minor) Even though synthetic images and texts are human evaluated and filtered, the datasets may be biased to having images that current generation models can already generate well, limiting the application of evaluating image generation models on the dataset on edge case prompts.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: What are the train, val, and test splits of SeeTRUE? If I understand correctly, the VQ2 models are not trained and are just evaluated on the test split. However, the VNLI is trained on the train splits of SeeTRUE and then evaluated on the test splits. Is that correct?
How many questions are asked for a given image-text pair in the VQ2 method?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Authors discuss limitations sufficiently
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for recognizing our approaches and the SeeTRUE benchmark as valuable to the community for evaluating vision-language models. We will address your comments and questions.
### Clarity on $VQ^2$ Design Choices [weakness 1 and question 2]
We've elaborated on the motivations underlying $VQ^2$ in Section 3.1. Further, Appendix A.3 offers insights into other $VQ^2$ variants we explored.
Our strategy for question-answer pair generation is based on established works like $Q^2$ and $VQ^2A$. We leverage a Question Generation (QG) model, and to refine the output, a Question Answering (QA) model is deployed against the text, thereby filtering irrelevant or low-quality questions.
### Benchmark's Value as a Comprehensive Dataset [weakness 2&3 and question 1]
Our benchmark's unique blend of natural & synthetic images and text differentiates it from Winoground, CREPE, and COLA. This composition permits more robust system evaluations. While Winoground adeptly assesses compositionality, our approach, with its contrasting captions, hones in on finer details such as color, objects, and image composition.
Notably, advanced chatbots like Bing Chat now serve users by either sourcing existing images or generating new ones “on the fly”. This evolution underscores the importance of real text and prompts in contemporary evaluations. Furthermore, there's a shift towards synthetic/generated captions in academia, highlighted by studies like "Improving Multimodal Datasets with Image Captioning" (https://arxiv.org/abs/2307.10350). Such research showcases how machine-generated captions can even outperform human annotations, thus elevating performance across diverse tasks.
Our SeeTRUE benchmark, particularly with synthetic datasets like DrawBench and EditBench, presents genuine challenges to image-text alignment methodologies. By using various text-to-image models, several data sources, and diverse prompts, we aim for a rich image-text example distribution. The inclusion of human-annotated pairs ensures our benchmark challenges text-to-image models with nuanced misalignments.
### Details on Training & Testing [question 1]
Our primary objective for SeeTRUE was to establish a high-quality, diverse benchmark for image-text alignment. As you've rightly noted, while the $VQ^2$ method undergoes evaluation on the test split, we utilize SNLI-VE train and validation splits for the VNLI model. Our training set further benefits from additional data, detailed in section 3.2 and Appendix A.4. | Summary: This paper introduces SeeTRUE, a benchmark for evaluating image-text alignment, encompassing a diverse range of both real and synthetic images and text. The authors propose two innovative approaches to evaluating alignment: VQ2, which relies on question generation and visual question resolution, and VNLI, which relies on fine-tuning substantial multimodal language models. The suggested methods perform better on a variety of alignment tests than earlier methods, particularly in difficult scenarios involving complicated composition or strange images. The study also shows how these techniques may rerank generated image candidates and pinpoint particular misalignments. This provides alignment evaluation methods for image-to-text and text-to-image models.
Strengths: - This paper introduces a comprehensive benchmark: SeeTRUE covers real and synthetic text and image pairs from a variety of different tasks, allowing for a more thorough evaluation of text-image alignment models. This could be a solid contribution to this field.
- The paper presents two new text-image alignment evaluation techniques, VQ2 and VNLI. These methods outperform previous ones, especially on complex or unnatural images.
- The paper is clearly written, and easy to follow
- The authors shared part of their code
Weaknesses: - Because performance is dependent on the LLM, the comparison doesn't seem fair.
- Also, as Generating Contradicting Captions (even with LLM) isn't a new concept, there isn't much insight to be gained from this paper.
(for example, Momeni et al., https://arxiv.org/abs/2304.06708)
- There aren't many examples. Even the appendix is not sufficient. It would be better if there were more examples.
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: -Regarding the Dependency on the LLM, the methods largely depends on the quality of the LLM and language-and-vision model. Does this mean that the proposed methods would perform poorly if the model is not good enough? Can you provide a comparison of how these methods perform with different LLM and VLM models?
-On the Novelty of Generating Contradicting Captions: since the generation of contradicting captions is not a new concept, how do the proposed LLM-based negative mining methods advance this field? Is there an additional aspect of the method that brings unique contributions to the field of text-image alignment?
-Can these methods be adapted for other tasks that involve the interaction of text and image?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 2 fair
Limitations: The authors addressed the limitations of this work. They could discuss the limitations of prompting LLMs in more detail, as well as suggest ways to mitigate them.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for acknowledging the contribution of the SeeTRUE benchmark to allow a more thorough evaluation of text-image alignment methods, and for recognizing our approaches as innovative. We will now address your comments and questions.
### Dependency on LLM/VLM Quality + LLM Limitations [weakness 1 and question 1 + limitation]
Our ConGen method does depend on an LLM, but with the increasing availability of robust LLMs like GPT4, Stable Beluga, Vicuna, Falcon, LLAMA2, etc., our methodology can be applied by others.
Based on your feedback, we explored the capability of GPT-4 in producing Contradicting Captions. We conducted an evaluation on Amazon Mechanical Turk, engaging three annotators to ascertain if the contradiction candidates correctly contradict the image. The results are on par with our original model, indicating that our ConGen method is compatible and effective with other leading language models. The full analysis is presented in Appendix A.5 in the revised paper.
As per VLM models, our results (Table 2) illustrate that the VNLI model based on PaLI excels over the BLIP-2-based model, where both are trained on the SNLI-VE dataset. Additionally, the quality of VNLI can be further improved with more extensive fine-tuning.
We also added the results of using VQ$^2$ with BLIP-2 instead of PaLI in Appendix A.6. Regarding the QG and QA models used in the V$Q^2$ method, in the Q$^2$ work (https://arxiv.org/abs/2104.08202) Table 8 presents results with T5-small instead of T5-XXL (11B) and the results are inferior.
### Generating Contradiction Captions [weakness 2 and question 2]
While prior works like BLIP2 (https://arxiv.org/abs/2301.12597) and ALIGN (https://arxiv.org/abs/2107.07651) employed hard negative mining at the embeddings layer using vector similarity, our method leverages LLMs.
We appreciate the reviewer pointing out the concurrent work. Notably, it was published in the same month as the NeurIPS submission deadline. While that study emphasizes video-text alignment using verb perturbations, our focus is broader, encompassing transformations of objects, relationships, and attributes.
Finally, our approach extends to synthetic images created by text-to-image models, while previous works focused on natural images or videos.
### Inclusion of More Examples [weakness 3]
Based on your feedback, we've enriched Appendix A.2 with additional examples from (a) instances from SeeTRUE; (b) the VQ$^2$ method; and (c) the ConGen technique. This will underscore the versatility of ConGen in handling both real and synthetic images, as well as the variety of contradictions it can generate.
---
Rebuttal Comment 1.1:
Title: Post rebuttal response
Comment: Thank you for your efforts in answering my questions.
After reading the authors' rebuttal, I have increased my rating to a 6 - Weak Accept | Rebuttal 1:
Rebuttal: We appreciate the reviewers highlighting the comprehensiveness of our SeeTRUE benchmark and its potential contribution to the field (j32y). The noted effectiveness of our visual entailment evaluation and advancements in negative captions generation, particularly in addressing prior benchmark limitations, were recognized (prMg & xP6K). The emphasis on our significant human judgments collection (UX4d) reinforces our dedication to the text-image alignment domain.
Below, we address each reviewer's comments and questions in detail. | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Resolving the Tug-of-War: A Separation of Communication and Learning in Federated Learning | Accept (poster) | Summary: This paper proposes a two-layer federated learning (FL) framework (FedSep) by separating the communication and learning parameters. Via a bilevel optimization formulation, FedSep enjoys a convergence guarantee. In addition, two settings, communication-efficient FL and model-heterogeneous FL, are solved within the FedSep framework.
Strengths: 1. FedSep is an interesting and novel setting.
2. Theoretical convergence analysis is provided, showing the sublinear rate.
3. FedSep is applied to communication-efficient FL and model-heterogeneous FL.
Weaknesses: I have the following concerns and questions:
1. The convergence rate is sublinear, but it does not show any speedup in terms of the number of clients or local steps, so it does not exactly match the convergence of standard FL algorithms. Is any improvement possible? Otherwise, the statement in the paper should be modified accordingly (e.g., line 75).
2. In the model-heterogeneous FL setting, the validation data set is also used in the training process, both in the problem formulation and in the experiments. I believe this is a concern and makes the direct comparison with other algorithms unfair.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: see above
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 4 excellent
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your positive comment. Below are responses to your questions:
**Q1: is it possible to show speedup in terms of the clients' number and local steps**
**A**: In Corollary 3.8, we focus on achieving the sub-linear rate. Indeed, it is possible to achieve a linear speedup in terms of the number of clients $M$ and the number of local steps $I$. More specifically, compared to Corollary 3.8, we can set $\gamma$ to a smaller value: $\gamma = \min(\frac{1}{2L},(\frac{1}{C_{\gamma}MIT})^{1/2})$,
which yields
a convergence bound of $O(\frac{1}{(MIT)^{1/2}})$
and thus a linear speedup w.r.t. $M$ and $I$. Recall that $\gamma$ denotes the learning rate used by the decode stage (Lines 7-12 in Algorithm 1); a smaller $\gamma$ leads to more decode steps (larger $I_{dec}$). But since we assume the lower optimization problem $g$ to be strongly convex (and thus linear convergence of the decode stage), the additional cost is negligible.
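As a quick numeric sanity check of the scaling claimed above, the sketch below (with purely illustrative constants `L` and `C_gamma`, not values from the paper) shows that the $\gamma$-dependent bound term $C_{\gamma}\gamma$ indeed scales as $1/\sqrt{MIT}$:

```python
# Hypothetical constants for illustration only; L and C_gamma are
# assumptions, not values taken from the paper.
L, C_gamma = 10.0, 4.0

def gamma_speedup(M, I, T):
    """Step size from the rebuttal: gamma = min(1/(2L), (1/(C_gamma*M*I*T))**0.5)."""
    return min(1.0 / (2 * L), (1.0 / (C_gamma * M * I * T)) ** 0.5)

# The gamma-dependent bound term C_gamma * gamma then scales as 1/sqrt(M*I*T):
M, I, T = 8, 5, 1000
term = C_gamma * gamma_speedup(M, I, T)
ref = C_gamma * gamma_speedup(4 * M, I, T)
# Quadrupling M halves the bound term, i.e. linear speedup in M.
assert abs(term / ref - 2.0) < 1e-9
```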
**Q2 In the model-heterogeneous FL, the validation data set is also used in the training process both for problem formulation and experiments. I believe this is a concern and unfair for the direct comparison with other algorithms.**
**A**: We want to clarify that all methods (both our FedSep and other baselines) are trained over the same training set. While for our FedSep, we use a small subset of the training data to do the sub-net selection. In other words, the $D_{val}$ in Eq.6 is a subset of the training set in experiments. Therefore, it is a fair comparison with other baselines.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for their detailed response. I still have concerns about the convergence. As suggested in Theorem 3.7 and Corollary 3.8, the algorithm can only converge to an error ball at the corresponding sublinear rate, rather than to a stationary point. The authors mentioned that the algorithm can achieve a linear speedup by choosing different learning rates. So what is the best rate it could actually achieve, and what is the corresponding setting of the hyper-parameters?
---
Reply to Comment 1.1.1:
Comment: Thanks for your reply. In fact, there is a **trade-off** between achieving a linear speedup w.r.t. $I$ and $M$ and controlling the error-ball size. We add more details below:
As shown in Line 624 of Appendix, we have the following bound before choosing a specific form of learning rate:
$$ \frac{4\bar{L}h(x_{1})}{T\eta} + \frac{\eta G_2^2}{2Ib_xM} + C_{\eta}I^2\eta^2 + C_{\gamma}\gamma + 2G_1^2 + 2(3\kappa^2L^2 + \hat{L}^2)(1 - \mu\gamma)^{I_{dec}}C_0
$$
**In the original Theorem 3.7**, we choose the learning rates as followings:
$$
\eta = \min\bigg(1, \bigg(\frac{8Ib_xM\bar{L}h(x_{1})}{TG_2^2}\bigg)^{1/2} ,\left(\frac{4\bar{L}h(x_{1})}{C_\eta I^2T}\right)^{1/3}\bigg),\; \gamma = \min\bigg(\frac{1}{2L}, \bigg(\frac{1}{C_{\gamma}T}\bigg)^{1/2}\bigg)
$$
which leads to a bound in the form of:
$$
\frac{4\bar{L}h(x_{1})}{T} + \bigg(\frac{2G_2^2\bar{L}h(x_{1})}{Ib_xMT}\bigg)^{1/2} + \left(\frac{16I^2C_{\eta}\bar{L}^2h(x_{1})^2}{T^2}\right)^{1/3} + \bigg(\frac{C_{\gamma}}{T}\bigg)^{1/2} + 2G_1^2 + 2(3\kappa^2L^2 + \hat{L}^2)(1 - \mu\gamma)^{I_{dec}}C_0
$$
As shown by the bound above, the second and the fourth terms are the dominant sub-linear terms, i.e. $\bigg(\frac{2G_2^2\bar{L}h(x_{1})}{Ib_xMT}\bigg)^{1/2} + \bigg(\frac{C_{\gamma}}{T}\bigg)^{1/2}$. The term $\bigg(\frac{2G_2^2\bar{L}h(x_{1})}{Ib_xMT}\bigg)^{1/2}$ indeed has a linear speedup w.r.t. $M$ and $I$; the bottleneck is $\bigg(\frac{C_{\gamma}}{T}\bigg)^{1/2}$, which only scales w.r.t. $T$.
**To achieve a linear speed up**, we instead reduce the value of $\gamma$ (step size of the decode problem) and choose the learning rates as follows:
$$
\eta = \min\bigg(1, \bigg(\frac{8Ib_xM\bar{L}h(x_{1})}{TG_2^2}\bigg)^{1/2} ,\left(\frac{4\bar{L}h(x_{1})}{C_\eta I^2T}\right)^{1/3}\bigg),\; \gamma = \min\bigg(\frac{1}{2L}, \bigg(\frac{1}{C_{\gamma}IMT}\bigg)^{1/2}\bigg)
$$
Then the dominant sublinear term is of the form $\bigg(\frac{2G_2^2\bar{L}h(x_{1})}{Ib_xMT}\bigg)^{1/2} + \bigg(\frac{C_{\gamma}}{IMT}\bigg)^{1/2}$, **which has a linear speedup w.r.t. both $I$ and $M$**. Meanwhile, note that the error-ball term is $2G_1^2 + 2(3\kappa^2L^2 + \hat{L}^2)(1 - \mu\gamma)^{I_{dec}}C_0$, whose second term also depends on $\gamma$; as a result, we need to increase $I_{dec}$ to control the size of the error ball. But since the error-ball size depends on $I_{dec}$ exponentially, a moderate value of $I_{dec}$ already suffices for satisfactory convergence in practice.
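To illustrate the last point, a tiny sketch (with placeholder values for $\mu$ and $\gamma$, not values from the paper) shows how quickly the decode-bias factor $(1-\mu\gamma)^{I_{dec}}$ in the error-ball term decays:

```python
# Placeholder values; mu and gamma here are illustrative, not from the paper.
mu, gamma = 1.0, 0.01

def decode_bias(i_dec):
    """Decode-bias factor (1 - mu*gamma)**I_dec from the error-ball term."""
    return (1 - mu * gamma) ** i_dec

# The factor decays exponentially in I_dec ...
assert decode_bias(500) < decode_bias(100) < decode_bias(10)
# ... so a few hundred decode steps already shrink it below 1%.
assert decode_bias(500) < 0.01
```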
Please let us know if you have any further concerns. | Summary: The authors proposed a two-layer federated learning framework called FedSep, with one layer for communication and another layer for learning. The two layers are connected through decode/encode operations. Furthermore, the authors proposed an efficient algorithm to solve FedSep by treating it as a bilevel optimization problem, and showed convergence results which match those of the standard framework. The FedSep can incorporate Communication-Efficient FL and Heterogeneous-Model FL.
Strengths: (S1) The proposed framework, FedSep, provided a general framework for resolving the tug-of-war by separating the communication and learning of federated learning into two layers.
(S2) The algorithm was explained in detail and the theoretical results are solid. The experimental results support the authors' claims well.
Weaknesses: (W) The framework relies on the analyticity and strong convexity of the second-level problem, which may not hold in many cases. For example, for the problem (Equation 4) formulated in Sec 4.1, analyticity and strong convexity do not seem to hold, although I still believe some convergence results can be derived.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: (Q) The authors claimed that Theorem 3.7 implies that $\mathbb{E}||\nabla h(x_t)||^2 + \frac{1}{2I}\sum_{i=1}^I \mathbb{E}||\mathbb{E}_{\xi}[\bar{\Delta} \hat{x}_{t,i}]||$ converges to zero. Because the right-hand side of the inequality in Theorem 3.7 contains constants, this claim is not straightforward to me; can the authors elaborate?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 4 excellent
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your positive comment. Below are responses to your concerns and questions:
**W: The framework relies on the analyticity and strong convexity of the second-level problem.**
**A**: In the bilevel optimization literature, the non-convex-strongly-convex assumption (including the smoothness assumptions) is widely used. These assumptions ease the analysis of the bilevel optimization problem: strong convexity of the lower problem makes the hyper-gradient well-defined, as the analytical form of the hyper-gradient requires the Hessian to be positive definite. The more general non-convex lower-level optimization problem is much more challenging to analyze, and we leave the analysis of this case as future work.
**Q: how the left-hand side of Theorem 3.7 converges to 0?**
**A**: One approach to achieve the convergence is choosing diminishing learning rate as stated in Corollary 3.8. In our updated version of the manuscript, we will move this discussion after the Corollary 3.8 to avoid confusion.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for their response. But I still have two questions:
(1) the authors do not answer why the problem (Equation 4) formulated in Sec 4.1 satisfies analyticity and strong convexity.
(2) There are still constants on the right-hand side of Corollary 3.8.
Therefore, I choose to keep my scores.
---
Reply to Comment 1.1.1:
Comment: Thanks for your comment.
**Q1**: Eq. 4 has an $L_1$ regularization term, so it is neither smooth nor strongly convex, and we cannot use Theorem 3.7 to predict its convergence. However, we can still use the proximal gradient descent algorithm to perform the decode operation and Eq. 5 to perform the encode operation. Empirically (on MNIST and CIFAR-10), we get good performance. As stated in our previous response, the general non-smooth, non-strongly-convex lower-level optimization problem is much more challenging to analyze and is an ongoing research topic in bilevel optimization; we leave the analysis of this case as future work.
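As a concrete illustration of a proximal-gradient decode of this kind, here is a minimal ISTA sketch. The objective form $\frac{1}{2}\|Ay - x\|^2 + \lambda\|y\|_1$, the matrix `A`, and all dimensions are assumptions for illustration; the paper's Eq. 4 may be parameterized differently.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1 (entrywise soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def lasso_decode(x, A, lam, steps):
    """ISTA on an assumed decode objective min_y 0.5*||A y - x||^2 + lam*||y||_1."""
    y = np.zeros(A.shape[1])
    step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1/L, L = Lipschitz const. of the smooth part
    for _ in range(steps):
        grad = A.T @ (A @ y - x)             # gradient of 0.5*||A y - x||^2
        y = soft_threshold(y - step * grad, step * lam)
    return y

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 50))            # maps 50-dim learning param to 20-dim comm param
y_true = np.zeros(50)
y_true[[3, 17, 40]] = [1.5, -2.0, 1.0]       # sparse ground-truth learning parameter
x = A @ y_true                               # low-dimensional communication parameter
lam = 0.05
y_hat = lasso_decode(x, A, lam, steps=2000)

objective = lambda y: 0.5 * np.linalg.norm(A @ y - x) ** 2 + lam * np.linalg.norm(y, 1)
# ISTA with step 1/L monotonically decreases the objective from the zero init.
assert objective(y_hat) < objective(np.zeros(50))
```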
**Q2**: Please refer to the answer to Question 3 in the overall response for the trade-off between encode/decode error and the overall convergence. Furthermore, the constant term $G_1^2 + \kappa^4(1 - \mu\gamma)^{I_{dec}} = \kappa(1 - \tau\mu)^{Q+1}C_f + \kappa^4(1 - \mu\gamma)^{I_{dec}}$ decays exponentially in $Q$ and $I_{dec}$ (so only logarithmically many steps are needed to make it small) and is thus negligible in practice.
Strengths: - The particular "separation" framework suggested in this paper appears to be novel; I have not seen anything exactly like this in earlier FL work.
- One advantage of the proposed bilevel approach is that each agent can train a separate local model $y^{(m)}$ of its own. This allows for personalization in the face of data-heterogeneity.
Weaknesses: I have several major concerns ranging from the motivation to the utility of the proposed approach. Let me elaborate on them below.
- The entire paper is based on the premise that communication and learning are conflicting goals in federated learning. However, this statement is never formalized at any point in the paper. The only discussion regarding the tension between communication and learning shows up in lines 25-30, which don't convey anything concrete.
- Continuing with the above point, I also missed the motivation for the specific bilevel formulation. By now, there are several approaches that guarantee communication-efficiency in FL (see Refs [R1]-[R5] below) using techniques ranging from quantization and sparsification to compressed sensing. It stands to reason that compressing information about models/gradients will naturally come at the cost of learning performance. So the tension here is evident, and it has been well explored and quantified in the papers mentioned above. In particular, sending fewer bits of information slows the rate of convergence, and one can potentially investigate the fundamental limits on the rate needed to achieve a desired level of accuracy; see, for instance, [R5]. However, there is no clear discussion of why the FedSep bilevel idea is any significant improvement over any of these MANY existing schemes.
[R1] Federated learning with compression: Unified analysis and sharp guarantees, Haddadpour et al., AISTATS 21
[R2] Linear convergence in federated learning: Tackling client heterogeneity and sparse gradients, Mitra et al., NeuRIPS 21
[R3] Optimizing the communication-accuracy trade-off in federated learning with rate-distortion theory, Mitchell et al., arXiv 2022
[R4] Communication-Efficient Federated Learning through Importance Sampling, Isik et al., arXiv 2023
[R5] Differentially quantized gradient methods, Yin et al., IEEE Transactions on Information Theory, 2022
- Regarding model-heterogeneity, some of the references I alluded to earlier do account for heterogeneous loss functions across agents. Moreover, if the goal is to allow for personalized models, then why not use any of the existing schemes for personalization in FL (of which, there are many)? See, for instance, ideas based on Moreau Envelopes and MAML in [R6] and [R7], and representation learning in [R8]. If one cares about both personalization and communication-efficiency, I can imagine simply using these ideas in tandem (in meaningful ways).
[R6] Personalized Federated Learning with Moreau Envelopes, Dinh et al, NeuRIPS 2020
[R7] Personalized Federated Learning: A Meta-Learning Approach, Fallah et al., NeuRIPS 2020
[R8] Exploiting Shared Representations for Personalized Federated Learning, Collins et al, ICML 21
The overarching point I am trying to make here is that each of the key considerations in FL that the authors allude to (communication-efficiency, heterogeneity, personalization, etc), have several existing principled algorithmic solutions. It wasn't apparent to me at all why there is a compelling need to depart from these existing approaches.
- In addition to the motivation, several parts of the paper are somewhat vaguely written. For instance, in Eq. (1), the meaning of the object $g^{(m)}$ is not explained. The encoding operator in Eq.~(2) isn't clear to me either. What is this operation and how does it compare with any of the standard encoding techniques (say for instance, standard scalar and vector quantizers, or sparsifying mechanisms like Top-k)? How many bits are needed to perform this encoding? What is the error caused by this encoding? Is this an unbiased encoding mechanism? No intuition is provided at all about any of these crucial points, making it hard to draw meaningful comparisons with existing schemes.
- The difference with existing federated bilevel optimization algorithms in lines 153-157 is also quite terse. The discussion did not come across as anything fundamental. It wasn't clear to me why the bilevel algorithm proposed can't be analyzed by simply adapting ideas from other existing FL bilevel algorithms.
- The main convergence result in Theorem 3.7 is hard to parse. I was left wondering about several key questions: (i) How does the compression scheme (encoding-decoding) affect the rate of convergence? (ii) How does this trade-off compare with the existing known bounds for compression in FL? (iii) How does the effect of heterogeneity in the agents' loss functions manifest in the bounds? Does this dependence match with those known for federated bilevel optimization?
Also, the iteration complexity seems to have a $\kappa^5$ dependence on the condition number $\kappa$. This seems much larger than what one typically obtains. Thus, I am not convinced about the tightness of the bounds either.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: No questions other than the ones above.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 2 fair
Limitations: I couldn't find a clear discussion of the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for spending time reviewing our paper. Below are our responses to your questions:
**A(W1: Formal discussion of communication/learning conflict)**: Please refer to the answer for **Q1 in the overall response.**
**A(W2: Comparison with existing compression FL methods)**: Please refer to the answer for **Q2 in the overall response.**
**A(W3: Comparison with existing personalized FL models)**: Firstly, FedSep provides a general framework to deal with the communication-learning dilemma in FL, and [1]-[3] can be viewed as special cases of FedSep with properly chosen decode functions: for the Moreau-envelope-based personalized FL [1], we set both the decode optimization problem ($h^{(m)}(x)$) and the local training objective ($f^{(m)}(x)$) to a loss function regularized with the squared $\ell_2$-norm; for the MAML-based method [2], we set the decode problem to one step of gradient descent; and the representation-learning method [3] corresponds to a degenerate case where the communication parameter (global representation) and the learning parameter (client-specific heads) are not connected through a decode problem. Note that [1]-[3] are designed for the data heterogeneity of FL; our FedSep can also incorporate model-heterogeneous FL, as discussed in Section 4.2, where we can choose models of different scales based on the available resources of each individual client.
Next, although various aspects of the communication-learning dilemma have been studied in the literature, these methods were proposed in **isolation**. FedSep provides the first unified analysis framework for them using the theory of bilevel optimization. Furthermore, FedSep can be used to design novel algorithms, as demonstrated by the communication-efficient and model-heterogeneous algorithms in Section 4 of the manuscript.
**A(W4: Discussion of the encoder operation)**: $g^{(m)}$ is the objective of the minimization problem solved in the decode operation by client $m$.
About the encode operation in Eq. 2: it is the **inverse operation of the decode operation**. Recall that the decode operation is defined as $y^{(m)}_x = Dec(x)$, where $y^{(m)}_x$ is the minimizer of $g^{(m)}(\cdot, x)$. The encode operation measures how the communication parameter changes when the minimizer $y^{(m)}_x$ changes, i.e. $\nabla_x y^{(m)}_x = Enc(\cdot)$. Eq. 2 gives an explicit form of $\nabla_x y^{(m)}_x$, which can be derived from the implicit function theorem (a standard result in bilevel optimization). In Section 4.1, we show an instantiation of the encode/decode operations in the context of communication-efficient FL: the decode function is a LASSO problem that recovers the high-dimensional learning parameter from a low-dimensional communication parameter under a sparsity constraint, while the encode operation is a non-linear mapping as shown in Eq. 5 of the manuscript. Finally, the dimension of the communication parameter can be chosen arbitrarily; however, this induces a coding-accuracy trade-off. Please refer to our answer to **Q3 in the overall response** for a more detailed analysis of this trade-off.
**A(W5: Difference with existing federated bilevel optimization algorithms)**: Our Algorithm 1 is much more efficient compared to a naive application of the bilevel optimization algorithm. Suppose we use a standard bilevel optimization algorithm to solve FedSep, then we perform multiple rounds of the following steps 1-4: step 1: we decode the communication parameter to get the learning parameter (solve the inner optimization problem); step 2: we compute the gradient of the learning objective w.r.t. the learning parameter; step 3: we compute the hyper-gradient w.r.t. the communication parameter through the encode operation; step 4: we use gradient descent to update the communication parameter.
This algorithm is actually a direct application of FedAvg to a bilevel optimization problem, and it is inefficient because it performs the decode (step 1) and encode (step 3) operations multiple times. **Our idea is to optimize the learning objective in the learning parameter space**. As shown in Algorithm 1, we perform the following steps: step 1: same as the naive approach; step 2: we perform multiple gradient descent steps to optimize the learning objective in the learning parameter space; step 3: we map the learning parameter update back to the communication parameter space through the encode operation; step 4: same as the naive approach. **Compared to the naive approach, we perform the decode/encode operations only once.**
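The four-step round structure described above can be sketched on a toy one-dimensional problem. Everything here (the quadratic decode objective $g(x,y)=\frac{1}{2}\|y-2x\|^2$, the learning loss $f(y)=\frac{1}{2}\|y-1\|^2$, and all step sizes) is illustrative, not the paper's actual setup:

```python
import numpy as np

def decode(x, steps=50, gamma=0.1):
    """Approximately solve the strongly convex decode problem
    min_y 0.5*||y - 2x||^2 by gradient descent (decode once per round)."""
    y = np.zeros_like(x)
    for _ in range(steps):
        y -= gamma * (y - 2 * x)
    return y

def learn(y, steps=5, eta=0.1):
    """Multiple local learning steps on f(y) = 0.5*||y - 1||^2 in y-space."""
    for _ in range(steps):
        y -= eta * (y - np.ones_like(y))
    return y

def encode(delta_y):
    """Map the learning-parameter update back to communication space.
    For the decode minimizer y(x) = 2x, the implicit-function Jacobian is 2I,
    so the chain rule maps delta_y to 2 * delta_y."""
    return 2 * delta_y

x = np.array([5.0])                       # communication parameter
for _ in range(100):                      # one decode + local steps + one encode per round
    y0 = decode(x)
    y1 = learn(y0.copy())
    x = x + 0.1 * encode(y1 - y0)         # outer update on the communication parameter
# The decoded parameter approaches the learning optimum y* = 1 (so x* ≈ 0.5).
assert abs(decode(x)[0] - 1.0) < 0.05
```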
The convergence analysis is also challenging. If we followed the naive approach, we could get an accurate estimate of the hyper-gradient (the gradient w.r.t. the communication parameter), and the convergence analysis would be a straightforward application of the FedAvg analysis. However, since we perform multiple gradient descent steps in the learning parameter space, we obtain a biased hyper-gradient estimate, and we need to bound this estimation error carefully (see Lemma C.6 for more details). Finally, in Corollary 3.8, we show that the communication parameter $x$ reaches a stationary point of the bilevel problem (in Eq. 1), and the learning parameter $y^{(m)}$ converges to a stationary point of the local learning problem $f^{(m)}(y)$.
**A(W6: Discussion of the convergence results)**: Please refer to the answer for **Q3** in the overall response.
**References**
[1] Personalized Federated Learning with Moreau Envelopes, Dinh et al, NeuRIPS 2020
[2] Personalized Federated Learning: A Meta-Learning Approach, Fallah et al., NeuRIPS 2020
[3] Exploiting Shared Representations for Personalized Federated Learning, Collins et al, ICML 21
---
Rebuttal Comment 1.1:
Title: Further Discussion over the Accuracy-Coding Trade-off
Comment: In the answer to **Q3 of the overall response**, we discussed the trade-off between coding and accuracy; we add more details here.
For the **decode** operation, we solve the minimization problem $\underset{y^{(m)} \in \mathbb{R}^{d^{(m)}}}{\arg\min}\, g^{(m)}(x, y^{(m)})$; in Algorithm 1 (Lines 7-12), we solve this problem with $I_{dec}$ steps, so we have a bias term on the order of $(1 - \mu\gamma)^{I_{dec}}$ for the decode operation. Naturally, we can increase $I_{dec}$ to reduce the bias of the decode operation. For the **encode** operation, we map the update of the learning parameter back to the communication parameter space following Eq. 2. In practice, we evaluate Eq. 2 approximately using Eq. 3, which is a biased estimator of Eq. 2 with bias and variance terms denoted $G_1$ and $G_2$, respectively. As shown by Proposition 3.5, we can reduce the bias $G_1$ by increasing the value of $Q$ in Eq. 3 and reduce the variance term $G_2$ by increasing the batch size $b_x$ (the size of the mini-batch $B_x$ in Eq. 3).
The effects of these bias/variance terms are illustrated in Theorem 3.7 (Corollary 3.8). A convergence bound with explicit dependence on $G_1$, $G_2$ and $(1 - \mu\gamma)^{I_{dec}}$ is $O\big(\frac{G_2}{\sqrt{T}}+ \tilde{G}\big)$, where $\tilde{G} = (1 - \mu\gamma)^{I_{dec}} + G_1$ is the sum of the decode and encode biases. As shown by this bound, both the bias and the variance terms affect the convergence of the algorithm. Furthermore, the bias terms decrease exponentially w.r.t. $I_{dec}$ and $Q$, so small values already lead to satisfactory performance in practice. As for the variance term $G_2$, it decreases linearly w.r.t. the batch size $b_x$, matching the dependence on stochastic noise in FedAvg.
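For intuition on an encode-style bias term like $G_1$, here is a sketch of the standard Neumann-series estimator of an inverse-Hessian-vector product used for hyper-gradients in bilevel optimization. We are assuming Eq. 3 is of this standard form; the matrix and constants below are illustrative:

```python
import numpy as np

# Neumann-series estimator: H^{-1} v ≈ tau * sum_{q=0}^{Q} (I - tau*H)^q v,
# valid for positive definite H and tau <= 1/||H||_2, with bias shrinking
# geometrically in Q (mirroring G_1 = O((1 - tau*mu)^{Q+1})).
rng = np.random.default_rng(1)
B = rng.standard_normal((5, 5))
H = B @ B.T + np.eye(5)                 # strongly convex => H positive definite
v = rng.standard_normal(5)
tau = 1.0 / np.linalg.norm(H, 2)        # step size tau = 1/L

def neumann_hvp(H, v, tau, Q):
    """Approximate H^{-1} v with Q+1 Neumann-series terms."""
    out, term = np.zeros_like(v), v.copy()
    for _ in range(Q + 1):
        out += term
        term = term - tau * (H @ term)  # term_{q+1} = (I - tau*H) term_q
    return tau * out

exact = np.linalg.solve(H, v)
err = lambda Q: np.linalg.norm(neumann_hvp(H, v, tau, Q) - exact)
# The bias decreases as Q grows.
assert err(200) < err(50) < err(5)
```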
---
Rebuttal Comment 1.2:
Title: Thank you for your response
Comment: I thank the authors for their rebuttal. One of my major concerns about the paper - which I explained in my first set of comments - is the need for a new bilevel framework, given the existence of several communication-efficient algorithms for optimization. Moreover, I asked how the new FedSep approach compares to the already known bounds for such algorithms.
I agree that the FedSep approach offers a potentially novel perspective to encoding-decoding information in FL. However, it is still not clear to me whether the performance of FedSep is any better than what is already known. Let me elaborate.
- Consider the paper "Differentially quantized gradient methods, Yin et al., IEEE Transactions on Information Theory, 2022" that I pointed to in my initial set of comments. For smooth and strongly convex optimization (without noise), this paper proposes a quantized gradient-descent algorithm that can retain the exact same linear convergence rate as without quantization, when sufficient bits are used for encoding. The number of such bits is also shown to be tight in this work. So for deterministic optimization, this paper provides a clear benchmark for comparison.
- Similarly, for stochastic optimization, the paper "The error-feedback framework: Better rates for SGD with delayed gradients and compressed communication" shows that by using error-feedback, one can retain nearly the same rates for sparsified SGD variants as without sparsification. Essentially, the effect of compression is relegated to higher order terms that are asymptotically negligible. Several subsequent variants have been developed for the multi-agent case.
So in short, any claims of usefulness of FedSep in terms of communication-learning trade-offs should be carefully contrasted with the results in these papers. Without a precise comparison revealing that FedSep performs no worse than these other schemes, I remain unconvinced of its utility.
- In terms of personalization, I am a bit confused. The authors mentioned that all the prior approaches for personalization using Moreau envelopes and MAML can be encompassed within their broader unified framework. If that is the case, then the main results in these papers should fall out as special cases of the main convergence result for FedSep. Can the authors argue that this is indeed the case?
For instance, consider the two papers below:
[1] Personalized Federated Learning with Moreau Envelopes, Dinh et al, NeuRIPS 2020
[2] Personalized Federated Learning: A Meta-Learning Approach, Fallah et al., NeuRIPS 2020
For suitably chosen encoding-decoding functions, can the authors show explicitly that the results in the above papers are special cases of Theorem 3.7.?
---
Reply to Comment 1.2.1:
Comment: Thanks for your reply. Below is our response to your further comments.
**Comparison with communication-efficient FL works**. Since we study the convergence of non-convex functions, we compare with the results in the second paper [3] you mentioned. More specifically, the rate achieved by [3] is $O(\frac{1}{\delta T} + \frac{\sigma}{\sqrt{T}})$, where $\delta$ is the bias of the compressor and $\sigma$ is the variance of the stochastic gradient estimate. As for our FedSep, we have a convergence rate of $O(\frac{G_2}{\sqrt{T}} + G_1^2 + \kappa^4(1 - \mu\gamma)^{I_{dec}})$, where $G_2 = O(\frac{1}{b_x})$ ($b_x$ is the batch size; see Proposition C.3 for the detailed expression) is the variance term related to the encode operation, $G_1 = \kappa(1 - \tau\mu)^{Q+1}C_f$ is the bias term related to the encode operation, and $\kappa^4(1 - \mu\gamma)^{I_{dec}}$ is the bias term related to the decode operation. So our FedSep gets **NO WORSE RESULTS** than [3]: the bias terms do not show up in the dominant term, and only the variance appears there. **Meanwhile, our FedSep does not involve any error-feedback steps and is thus stateless (a desired property for FL).**
**Comparison with other personalized FL works**. We make a thorough comparison between our FedSep and the pFedMe [1] framework below. *As for Per-FedAvg [2], as argued in Eq. 5 of [1], the problem formulation studied by [2] is a first-order approximation of the problem studied in [1], so the comparison extends to Per-FedAvg straightforwardly.*
**Compare with pFedMe**[1]:
* **Formulation**: Our FedSep **recovers** the formulation of pFedMe by setting the communication parameter as $\omega$, the learning parameter as $\theta_i$ on the $i_{th}$ client, the decoding function (the lower level problem g in our framework, Eq. 1) as $f_i(\theta_i) +\frac{\lambda}{2}\|\theta_i - \omega\|^2$ and the learning problem (the upper level problem f in our framework, Eq. 1) $f_i(\theta_i(\omega)) +\frac{\lambda}{2}\|\theta_i(\omega) - \omega\|^2$, where $\theta_i(\omega)$ is the minimizer of the decode problem. A special characteristic of this formulation is that its decode problem and learning problem are the **SAME**.
* **Algorithm**: Translated into the language of our framework, Algorithm 1 of [1] performs multiple rounds of decode (solving the Moreau envelope problem) and encode (computing the hyper-gradient of the learning problem w.r.t. $\omega$, i.e. $\lambda(\omega - \theta_i(\omega))$) operations per global round. Due to the special structure of its formulation (the learning problem is the same as the decode problem), there is no need to solve the learning problem after the decode operation. In comparison, our algorithm is designed for the general setting (the learning problem is different from the decode problem) and performs the decode/encode operations once while solving the learning problem with multiple steps. For cases where the encode/decode operations are expensive, such as communication-efficient FL, our algorithm is more computation-efficient.
* **Convergence Result**: Due to differences between the algorithms and the specific details of the analyses, our convergence theorems are not exactly the same. However, there are some important **observations**. **First**, the authors claim (in Theorem 2 of [1]) that the pFedMe algorithm has a convergence rate of $1/(TRN)^{2/3}$, which outperforms the standard single-level FL algorithms, but it also has an **irreducible constant noise term $\sigma_{F,2}^2/\lambda^2$**; in contrast, we recover the convergence rate of standard single-level FL algorithms and do not have this constant noise term. **Second**, note that Theorem 2 of [1] has an error ball of size $O(\delta^2)$ (the error of solving the Moreau envelope problem), which corresponds to the bias term of the decode operation, $\kappa^4(1 - \mu\gamma)^{I_{dec}}$, in our Theorem 3.7. Due to the special formulation (the learning problem is the same as the decode problem in pFedMe), the encode operation is unbiased, so Theorem 2 of [1] has no dependence on the bias of the encode operation. For the more general setting where the learning and decode problems are not the same, we show in our Theorem 3.7 that there is also an additive dependence on the encode operation bias $G_1$.
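The pFedMe-style decode/encode pair described in the bullets above can be sketched on a toy quadratic local loss $f_i(\theta)=\frac{1}{2}\|\theta-a_i\|^2$, for which the Moreau decode step has a closed form (all values below are illustrative):

```python
import numpy as np

def pfedme_decode(omega, a_i, lam):
    """theta_i(omega) = argmin_theta 0.5*||theta - a_i||^2 + (lam/2)*||theta - omega||^2,
    i.e. the proximal point; for this quadratic f_i it is (a_i + lam*omega)/(1 + lam)."""
    return (a_i + lam * omega) / (1.0 + lam)

def pfedme_encode(omega, theta, lam):
    """Gradient of the Moreau envelope w.r.t. omega: lam*(omega - theta_i(omega))."""
    return lam * (omega - theta)

omega, a_i, lam = np.array([0.0]), np.array([4.0]), 2.0
for _ in range(200):
    theta = pfedme_decode(omega, a_i, lam)
    omega = omega - 0.1 * pfedme_encode(omega, theta, lam)  # gradient step on omega
# The communication parameter converges to the local optimum a_i = 4.
assert abs(omega[0] - 4.0) < 1e-3
```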
**References**
[1] Personalized Federated Learning with Moreau Envelopes, Dinh et al, NeuRIPS 2020
[2] Personalized Federated Learning: A Meta-Learning Approach, Fallah et al., NeuRIPS 2020
[3] Stich, Sebastian U., and Sai Praneeth Karimireddy. "The error-feedback framework: Better rates for SGD with delayed gradients and compressed communication." arXiv preprint arXiv:1909.05350 (2019). | Summary: The paper uses bilevel optimization in a federated learning setting. A unique decomposition of communication and learning has been identified, which has applications for reducing communication overhead and supporting heterogeneous models. The theoretical convergence guarantee has been presented and experiments also show the advantage of the proposed method.
Strengths: - The use of bilevel optimization as a decomposition into communication and learning in a FL setting is interesting.
- The framework is general and supports multiple application scenarios of FL.
- Theoretical convergence bound is provided.
Weaknesses: - The use of encoder/decoder structure may incur additional computation compared to common FL algorithms. It would be nice to quantify the amount of such additional computation.
- The extension of common bilevel optimization algorithms to include multiple update steps (as pointed out at the end of page 4) seems to be somewhat similar to the idea of local updates in FedAvg / local SGD algorithms. It would be nice to identify what are the key technical challenges and novel solution techniques in this extension.
- There is space for improvement in the experiments (see details below).
- The writing could be improved to highlight the usefulness of encoder/decoder in the context of FL, in early parts of the paper.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: - What is the additional computation overhead compared to standard FL algorithms such as FedAvg?
- What are the key technical (mathematical) challenges and novelties?
- In the experiments in Section 5.1, biased compressors such as top-K usually only work well with error feedback [a]. Is error feedback used in the baseline methods? In addition, are the first and third plots in Figure 2 results at convergence? Theoretically, all methods (when using error feedback in the baselines) should converge to the optimal solution when the experiments are run long enough. More explanation is needed on why a much worse accuracy is obtained when the compression rate is high. Is it a matter of hyperparameter tuning? It may be better to plot the accuracy against the amount of communication (e.g., in the number of bytes transmitted, *not* in communication rounds), to compare the accuracies of different methods and different compression rates at the same amount of communication.
- In Section 5.2, the advantage of having heterogeneous models is not quite clear.
[a] Stich, Sebastian U., and Sai Praneeth Karimireddy. "The error-feedback framework: Better rates for SGD with delayed gradients and compressed communication." arXiv preprint arXiv:1909.05350 (2019).
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for spending time reviewing our paper. Below are our responses to your questions:
Response to Concerns:
**A(W1: Additional computational cost)**: Compared to FedAvg, FedSep needs to perform extra decode and encode operations. For the decode operation, we perform $I_{dec}$ gradient descent steps to optimize the decode problem (Lines 7-12 of Algorithm 1), while for the encode operation, we perform $Q$ Hessian-vector products (Eq. 3). Since the estimation errors of both the decode and encode operations converge linearly, we can choose small values of $I_{dec}$ and $Q$ in practice, so the overhead is negligible. Furthermore, to incorporate the communication-learning trade-off, most existing approaches need some extra encode/decode operations. For example, FetchSGD [1] is a communication-efficient FL method that uses a Count-Sketch to compress the local updates before communicating with the server.
**A(W2: Technical novelty and difficulty)**: Our Algorithm 1 is much more efficient than a naive application of a bilevel optimization algorithm. If we used a standard bilevel optimization algorithm to solve FedSep, we would perform multiple rounds of the following steps 1-4. Step 1: decode the communication parameter to get the learning parameter (solve the inner optimization problem). Step 2: compute the gradient of the learning objective w.r.t. the learning parameter. Step 3: compute the hyper-gradient w.r.t. the communication parameter through the encode operation. Step 4: update the communication parameter by gradient descent.
This algorithm is a direct application of FedAvg to a bilevel optimization problem, which is inefficient because the decode (step 1) and encode (step 3) operations are performed multiple times. **Our idea is to optimize the learning objective in the learning parameter space**. As shown in Algorithm 1, we perform the following steps. Step 1: same as the naive approach. Step 2: perform multiple gradient descent steps to optimize the learning objective in the learning parameter space. Step 3: map the learning parameter update back to the communication parameter space through the encode operation. Step 4: same as the naive approach. **Compared to the naive approach, we perform the decode/encode operations only once per round.**
The convergence analysis is also challenging. Under the naive approach, we would get an accurate estimate of the hyper-gradient (the gradient w.r.t. the communication parameter), and the convergence analysis would be a straightforward application of the FedAvg analysis. However, since we perform multiple gradient descent steps in the learning parameter space, the hyper-gradient estimate is biased, and we need to bound this estimation error carefully (see Lemma C.6 for more details). Finally, in Corollary 3.8, we show that the communication parameter $x$ reaches a stationary point of the bilevel problem (in Eq. 1), and the learning parameter $y^{(m)}$ converges to a stationary point of the local learning problem $f^{(m)}(y)$.
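To make the "decode once, optimize in the learning space, encode once" update order concrete, here is a minimal single-client toy sketch. The linear decode map, the quadratic learning objective, and all constants are hypothetical stand-ins for illustration only, not FedSep's actual operations:

```python
import numpy as np

rng = np.random.default_rng(0)
D = rng.standard_normal((8, 4))   # toy decode map: comm. space (4-d) -> learning space (8-d)
y_star = rng.standard_normal(8)   # minimizer of the toy learning objective f(y) = 0.5*||y - y_star||^2

def decode(x, I_dec=200, gamma=0.1):
    # Iterative decode: approximately solve min_y 0.5*||y - D @ x||^2
    # with I_dec gradient steps (the error decays linearly in I_dec).
    y = np.zeros(8)
    for _ in range(I_dec):
        y -= gamma * (y - D @ x)
    return y

def learn(y, I=10, eta=0.1):
    # Many cheap gradient steps taken directly in the learning-parameter space.
    for _ in range(I):
        y = y - eta * (y - y_star)
    return y

def encode(delta_y):
    # Map a learning-space update back to communication space. For this linear
    # toy decode the exact encode is the pseudo-inverse of D; in general it
    # would be approximated (e.g., with Hessian-vector products).
    return np.linalg.pinv(D) @ delta_y

x = np.zeros(4)
for _ in range(20):               # outer rounds: one decode and one encode each
    y0 = decode(x)
    y1 = learn(y0.copy())
    x = x + encode(y1 - y0)

# x approaches a stationary point of the composed objective f(D @ x).
grad_x = D.T @ (D @ x - y_star)
```

The contrast with the naive scheme is that the inner `learn` loop runs many steps per round, while `decode`/`encode` run only once per round.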
**A(W3: Experimental setting)**: See our response to **Q3** below.
**A(W4: Usefulness of encoder/decoder)**: The encoder/decoder are bridges connecting the communication parameter and the learning parameter. Selecting the appropriate decode/encode operations is central to applying FedSep to various challenges in Federated Learning (FL), as shown by the communication-efficient FL task and the model-heterogeneous task.
Response to Questions:
**A(Q1: Additional computational cost)**: Please see response to the **W1** above.
**A(Q2: Technical novelty and difficulty)**: Please see response to the **W2** above.
**A(Q3: Experimental setting)**: We include error feedback for the Top-K method.
We compare our FedSep with the other baselines using the following scheme in the first and third plots of Figure 2. First, we run the uncompressed baseline for a given number of global epochs (until convergence), and we denote its communication budget as $P$. Note that the uncompressed method has a compression rate of 1. **For the other methods, we vary the hyper-parameters of each method under a given compression rate and report the best accuracy**. Take the Top-K method as an example: under a compression rate of 10, we run Top-K with different values of K until reaching the communication budget of $0.1P$, and report the best accuracy achieved.
In summary, the compression rate is defined as the ratio of the total communication budget ($P$) to the number of bits transmitted. Using this comparison scheme, we can **compare the performance limit of each method under a given communication budget**; note that [1] also uses a similar scheme. Furthermore, reading the first and third plots of Figure 2 from right to left, we can see that the accuracy indeed increases as the amount of communication increases.
**A(Q4: Advantages of having heterogeneous models)**: Training heterogeneous models incorporates the different resource limits of clients: some clients have the capacity to train a large model, while others can only train a small model. In particular, we implement one way of achieving heterogeneous models with FedSep: data-driven sub-network selection. As shown by Table 1, our FedSep outperforms the other sub-network selection methods and achieves accuracy comparable to the Homogeneous (large) baseline, where all clients train the full-size model.
**References**
[1] Rothchild, D., Panda, A., Ullah, E., Ivkin, N., Stoica, I., Braverman, V., Gonzalez, J., and Arora, R. FetchSGD: Communication-efficient federated learning with sketching. In International Conference on Machine Learning, pages 8253-8265. PMLR, 2020.
---
Rebuttal Comment 1.1:
Title: Additional Experiments: Accuracy w.r.t. The amount of Communication
Comment: In the second and fourth plots of Figure 2, we show the test accuracy w.r.t. the number of communication rounds when transmitting different numbers of parameters. As suggested by the reviewer, we also show the **test accuracy w.r.t. the total communication** for our FedSep below.
Table 1 shows the results for the I.I.D case, while Table 2 shows the Non-I.I.D case. **Note** that one communication unit denotes transferring a number of bits equal to the size of the full model. Each column represents a different total amount of communication, and each row represents a different number of parameters transferred per round.
**Table 1**: Test Accuracy w.r.t. the Total Amount of Communication (I.I.D)
| Num. Params. / Comm. | 5 | 20 | 30 | 40 | 50 |
|-------|--------|--------|--------|--------|--------|
| 20000 | 0.513 | 0.905 | 0.933 | 0.951 | **0.969** |
| 10000 | 0.434 | 0.914 | **0.945** | **0.966** | 0.967 |
| 5000 | 0.375 | **0.922** | 0.941 | 0.956 | 0.961 |
| 2000 | 0.353 | 0.896 | 0.936 | 0.943 | 0.956 |
| 500 | **0.627** | 0.878 | 0.880 | 0.886 | 0.886 |
**Table 2**: Test Accuracy w.r.t. the Total Amount of Communication (Non. I.I.D)
| Num. Params. / Comm. | 5 | 20 | 30 | 40 | 50 |
|-------|-------|-------|-------|-------|-------|
| 20000 | 0.262 | 0.820 | 0.862 | 0.901 | 0.919 |
| 10000 | 0.183 | 0.847 | 0.886 | **0.917** | **0.923** |
| 5000 | 0.226 | **0.862** | **0.894** | 0.916 | 0.920 |
| 2000 | 0.217 | 0.811 | 0.848 | 0.862 | 0.881 |
| 500 | **0.356** | 0.696 | 0.754 | 0.731 | 0.740 |
From the tables, we observe that given a low communication budget, we get the best accuracy by transferring a smaller number of parameters (a high parameter compression rate). As the budget increases, transferring more parameters per round yields better accuracy. **Intuitively**, training for more rounds tends to give higher accuracy, and transferring fewer parameters per round allows more rounds of training under a given communication budget.
---
Rebuttal Comment 1.2:
Title: Not clear whether error-feedback has been used in the top-K baseline
Comment: Thanks for the response. It remains unclear whether error-feedback has been used in the top-K baseline in the experiments. This is an important point, because top-K can have much better performance with error-feedback compared to without error-feedback, since it is a biased compressor. A related point has also been mentioned in Reviewer Dwjg's latest comment (on Aug. 15). There exist several papers on this topic. For example, a related work that applies error-feedback top-K on both the client and server is Tang et al., DoubleSqueeze: Parallel Stochastic Gradient Descent with Double-Pass Error-Compensated Compression, ICML 2019.
I would suggest that the authors compare with error-feedback top-K, if the current top-K implementation in the baseline does not include error-feedback.
---
Reply to Comment 1.2.1:
Comment: Thanks for your comment! We carefully checked the code implementation. For the Top-K baseline, we use the standard error-feedback scheme shown in Algorithm 1 of [1]. **However, we found that we added the error-feedback term to the gradient instead of to the learning rate times the gradient** (see Line 3 of Algorithm 1 of [1]). This is equivalent to applying a shrinking coefficient (equal to the learning rate) to the error feedback, which weakens its benefit; indeed, the resulting performance is close to that of the vanilla Top-K method.
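For concreteness, a minimal sketch of error-feedback Top-K with the memory added to the learning rate times the gradient, in the spirit of Algorithm 1 of [1]. The toy quadratic objective and all constants here are hypothetical, not the paper's experimental setup:

```python
import numpy as np

def topk(v, k):
    # Keep the k largest-magnitude entries of v, zero out the rest.
    out = np.zeros_like(v)
    idx = np.argsort(np.abs(v))[-k:]
    out[idx] = v[idx]
    return out

rng = np.random.default_rng(0)
x_star = rng.standard_normal(50)      # minimizer of f(x) = 0.5*||x - x_star||^2
x = np.zeros(50)
memory = np.zeros(50)
lr, k = 0.05, 5                       # 10x sparsification: 5 of 50 coordinates

for _ in range(5000):
    grad = x - x_star                 # exact gradient of the toy objective
    update = lr * grad + memory       # memory added to lr * grad, not to grad alone
    compressed = topk(update, k)      # only these k coordinates are "transmitted"
    memory = update - compressed      # feed back what Top-K dropped
    x -= compressed
```

Adding the memory to `grad` instead of `lr * grad` (the bug described above) would be equivalent to shrinking the fed-back error by the learning rate, largely canceling the benefit of the memory.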
In Tables 1 and 2 below, we show the corrected Top-K results for the I.I.D and Non-I.I.D MNIST dataset (first and third plots in Figure 2). As shown by the results, the accuracy improves substantially, especially in the Non-I.I.D case. However, we still observe that Top-K diverges under high parameter compression rates. In particular, when we set $K=1500$, it diverges under both I.I.D and Non-I.I.D settings. In contrast, our FedSep still performs well, with 71% accuracy in the Non-I.I.D setting after 2000 rounds. In the final version, we will also correct the results for the CIFAR-10 dataset.
Table 1: Accuracy vs Compression Rate for I.I.D MNIST
| Compression Rate | 50 | 20 | 10 | 5 |
|-------|-------|-------|-------|-------|
|FedSep | **0.9438**| **0.9792**| 0.9820| 0.9848|
|Count-Sketch | 0.9199| 0.9639| 0.9755| 0.9852|
|Top-K | 0.9354| 0.9667| **0.9888**| **0.9875**|
Table 2: Accuracy vs Compression Rate for Non. I.I.D MNIST
| Compression Rate | 50 | 20 | 10 | 5 |
|-------|-------|-------|-------|-------|
|FedSep | **0.8930**| **0.9318**| 0.9337| 0.9044|
|Count-Sketch | 0.5966| 0.9186| 0.9413| **0.9659**|
|Top-K | 0.8810| 0.9235| **0.9443**| 0.9428|
References
[1] Stich, Sebastian U., Jean-Baptiste Cordonnier, and Martin Jaggi. "Sparsified SGD with memory." Advances in Neural Information Processing Systems 31 (2018). | Rebuttal 1:
Rebuttal: Thanks all the reviewers for their time and effort. Below we provide responses to questions related to the problem formulation, motivation and interpretation of the convergence theorem:
**Q1: Can you formalize the conflict between communication and learning?**
**A**: Formally, the conflict between communication and learning can be expressed by the following constrained federated optimization problem: $\underset{x}{\min } h(x) \coloneqq \frac{1}{M}\sum_{m=1}^M L(x; D^{(m)}), \text{s.t. } \text{Comm}(x) \leq C_1, \text{Cap}(x) \leq C_2$. Here $L(x; D^{(m)})$ denotes the training objective: we fit a model parameterized by $x$ over the dataset $D^{(m)}$, where $L$ is a loss function. For the constraints, $\text{Comm}(\cdot)$ denotes the communication constraint (e.g., the bit-rate transferred between the server and a client is upper bounded), while $\text{Cap}(\cdot)$ denotes the computation capacity constraint (e.g., the GPU memory of a client constrains the largest model that can be trained). For simplicity, we assume both $\text{Comm}(\cdot)$ and $\text{Cap}(\cdot)$ depend only on the dimension of the parameter $x$. The conflict between communication and learning is straightforward to identify from this formulation. First, to satisfy the communication constraint, we need $x$ to lie in a low-dimensional space, whereas a high-dimensional parameter is desired for optimizing the objective $L(x; D^{(m)})$. Furthermore, the capacity limit $C_2$ is determined by the smallest capacity among all clients, i.e., $C_2 = \min(C^{(m)}_2, m\in [M])$, where $C^{(m)}_2$ denotes the capacity constraint of client $m$; as a result, clients with larger capacity are under-utilized. One intuitive way to mitigate these issues is to use different sets of parameters to satisfy the two constraints, which is exactly the communication parameter and the learning parameter proposed in the FedSep framework.
**Q2: Why do we need FedSep if several approaches already exist for communication-efficient FL?**
**A**: Compared to existing works in the literature, our FedSep framework (with a bilevel formulation) **provides a novel and different perspective** to tackle the communication-efficiency challenge in FL.
More specifically, existing works often view compression operations (such as quantization and sparsification) as introducing errors into the optimization process. As a result, they either employ error-feedback techniques for correction or leverage information-theoretic tools to determine the optimal encoding strategy within communication constraints. In contrast, FedSep distinguishes between communication parameters and learning parameters. Rather than focusing on compression error analysis as existing approaches do, we perform convergence analysis through the lens of bilevel optimization.
As shown in the convergence analysis (Theorem 3.7 and Corollary 3.8), the communication parameter $x$ reaches a stationary point of the bilevel problem (in Eq. 1), and the learning parameter $y^{(m)}$ converges to a stationary point of the local learning problem $f^{(m)}(y)$. In the context of communication-efficient FL, we choose the decode function $g^{(m)}$ to be identical across all clients; therefore, the learning parameter $y^{(m)}$ is also identical across clients. Combining this with the convergence result, we show that $y \coloneqq y^{(m)}, m \in [M]$ is a stationary point of the classical FL optimization problem $\frac{1}{M}\sum_{m=1}^M f^{(m)}(y)$. To the best of our knowledge, this is the first convergence result for the communication parameter (equivalently, the compressed parameter in the literature).
**Q3: How to interpret the convergence result in Theorem 3.7**
**A**: First, **FedSep exhibits a coding-accuracy trade-off**. As shown in Theorem 3.7 (Corollary 3.8), the encode/decode operations affect convergence in two ways: $G_2$, the variance of the encode operator, appears as a coefficient of the $\frac{1}{\sqrt{T}}$ term, while $\tilde{G}$, the bias of the encode/decode operations, appears as an additive term. To control the variance term, we can increase the batch size $b_x$; to control the bias term, we can increase the number of decode steps $I_{dec}$ and the number of encode steps $Q$. Since the bias errors of both the decode and encode operations converge linearly, we can choose small values of $I_{dec}$ and $Q$ in practice. This is consistent with the accuracy-communication trade-off in the compression FL literature. In fact, a higher compression rate makes the decode minimization problem more ill-posed (e.g., the decode function in Eq. 4 of the manuscript), thus requiring more decode/encode steps to reach a given error.
Next, our FedSep matches the standard $O(\epsilon^{-2})$ rate of compression FL [1] for non-convex functions. Regarding the encoding/decoding-related dependence: [1] has a linear dependence on the compression error, while our FedSep has a linear dependence on the variance term $G_2$ and an additive dependence on the bias term $\tilde{G}$.
Finally, the convergence rate of FedSep matches that of the federated bilevel optimization method FedNest [2], with a rate of $O(\kappa^5\epsilon^{-2})$. Note that $\kappa$ is the condition number of the decode function $g^{(m)}$, which differs from the condition number of a strongly convex objective in classical FL. The convergence bound does not rely on the heterogeneity of the clients' loss functions, since we choose the global learning rate $\eta_g = O(\frac{1}{I})$ to control the error caused by local updates.
**References**
[1] Federated learning with compression: Unified analysis and sharp guarantees, Haddadpour et al., AISTATS 21
[2] Tarzanagh, Davoud Ataee, et al. "Fednest: Federated bilevel, minimax, and compositional optimization." International Conference on Machine Learning. PMLR, 2022. | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Active Vision Reinforcement Learning under Limited Visual Observability | Accept (poster) | Summary: This paper studies an interesting setting where an RL agent needs to simultaneously decide how to act in the motor space and how to act to change the 2D observation space. Under this setting, the authors propose a new framework named SUGARL, mainly consisting of three technical contributions:
- a two-branch joint learning framework;
- a definition of the sensorimotor reward using an inverse dynamics model
- a technique to accumulate history observations named Persistence-of-Vision Memory
The authors conduct experiments on Atari with DQN and on DMControl with DrQv2, to show the effectiveness of their framework compared to Random View and Raster Scanning baselines.
Strengths: 1. An interesting problem named Active RL is formulated and well studied. This problem has real-world relevance and could have a large impact on real-world applications of Visual RL.
2. To drive better learning of the sensorimotor policy, a new intrinsic reward is defined based on an IDM, which is well motivated and also effective in experiments.
3. The evaluation looks extensive, conducted on two kinds of environments with both continuous and discrete action spaces, and every component is well ablated.
4. The learned sensorimotor policy gives relatively interesting visualizations, where the task-oriented objects are more focused.
Weaknesses: 1. **Environments do not match the problem setting.** Although the proposed setting and the motivating example (i.e., robotic manipulation) are interesting to me, my major concern is the mismatch between the proposed setting and the experiment environments, where the authors only focus on 2D RL environments (DMC and Atari). Recent years have seen great advances in robotic simulation environments (e.g., MetaWorld [1], RoboSuite [2], ManiSkill [3], RLBench [4], ...), and it would be good to see the active RL setting studied on these robotic environments, instead of toy 2D environments such as video games.
2. **Baselines are weak.** The presented experiment results show that SUGARL outperforms Random View and Raster Scanning, two weak random baselines that cannot demonstrate the advantage of the proposed framework. It would be good if the authors could devise some stronger baselines (such as using a pre-trained detection network [5] to get the target, or using hand-crafted features like color and shape to get the target) and compare against them.
[1] Yu, Tianhe, et al. "Meta-world: A benchmark and evaluation for multi-task and meta reinforcement learning." *Conference on robot learning*. PMLR, 2020.
[2] Zhu, Yuke, et al. "robosuite: A modular simulation framework and benchmark for robot learning." *arXiv preprint arXiv:2009.12293* (2020).
[3] Mu, Tongzhou, et al. "Maniskill: Generalizable manipulation skill benchmark with large-scale demonstrations." *arXiv preprint arXiv:2107.14483* (2021).
[4] James, Stephen, et al. "Rlbench: The robot learning benchmark & learning environment." *IEEE Robotics and Automation Letters* 5.2 (2020): 3019-3026.
[5] https://github.com/facebookresearch/detectron2
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Please see `weaknesses` for questions.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The limitations of the environments are not fully discussed in the conclusion section; this is my major concern with this work. Other parts are good.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for recognizing our contributions. Please find our response below.
**1\. Experiments on robotics tasks**
Following the suggestion, we conducted new experiments on Robosuite [Zhu et al., 2020] and found that SUGARL performs best, especially on harder manipulator tasks. Please see our general response and Table 1 in our rebuttal PDF.
**2\. Baselines**
We follow the suggestion and introduce more SOTA baselines. Please also refer to our general response and Table 1 in our rebuttal PDF for details. We implement SOTA baselines including (a) DrQ with object detection using pre-trained DETR [Carion et al., 2020], and (b) DrQ with learned attention [Tang et al., 2020]. The former is our replication of a relevant approach from [Cheng et al., 2018], with a stronger object detection module and a stronger RL algorithm. We find that SUGARL outperforms both baselines by a large margin.
**3\. Discussion on limitations**
Following the suggestion, we provide a limitation section which we will incorporate to the revised paper.
In this work, we assume that completely independent sensory and motor actions are present in an embodied agent. In a real-world case, however, the movement of the sensors may depend on the motor actions, for example, a fixed camera attached to the end-effector of a robot manipulator or to a mobile robot. To address the potential dependence and conflicts between the two policies in this case, extensions like voting or weighting across the two actions to decide the final action may be required. The proposed algorithm also assumes a chance to adjust viewpoints at every step. This could be challenging for applications where the operational or latency costs of adjusting the sensors are high, such as remote control. To resolve this, additional penalties on sensory actions and larger memorization capability are potentially needed. Last, the intrinsic reward currently only considers the accuracy of agent-centric prediction. Other incentives, like gathering novel information or prediction accuracy over other objects in the environment, can be further explored.
We appreciate your consideration of our clarifications.
**References**
- Carion et al., End-to-End Object Detection with Transformers, ECCV 2020
- Tang et al., Neuroevolution of Self-Interpretable Agents
- Cheng et al., Reinforcement Learning of Active Vision for Manipulating Objects under Occlusions, CoRL 2018
- Zhu et al., robosuite: A modular simulation framework and benchmark for robot learning, 2020
---
Rebuttal Comment 1.1:
Comment: Thank the authors for providing new results. Here are my questions to the authors' replies.
**Q1:** "Active RL agent is allowed to control the camera." Which camera is controlled?
**Q2:** Could the authors show the visualization (e.g., videos about how the camera is changed by the policy) of the senor policy in robotic environments, like the videos originally provided?
**Q3:** In the implemented stronger baseline, i.e., DrQ with DETR, is the bounding box provided as additional information, or the images cropped by the bounding box provided as additional information? I hope authors could explain more on the implementation details.
---
Reply to Comment 1.1.1:
Comment: Thanks for your reply!
**1\. Which camera is controlled**
We initialize the movable camera at the location of one of the hand-coded views from robosuite, the "side view" on the left of the robot.
We also report the performance of standard RL using this "side view" in the table below. The performance is lower than that of standard RL with the other hand-coded views (results in the rebuttal PDF), showing that learning from this view is challenging.
| | Wipe | Door | NutAssemblySquare | Lift | Stack |
|-----------|:----:|:-----:|:-----------------:|:----:|:-----:|
| Side view | 25.9 | 136.2 | 34.5 | 56.6 | 12.8 |
**2\. Visualization**
Sure. Please check our video at https://drive.google.com/file/d/1Ay-ZdJMF3ekjNe1sa7Gt9fqMgtxb-itb/view?usp=sharing
**3\. DrQ with DETR baseline**
We provide the 3D positions of the detected objects, inferred from bounding boxes (from DETR) and camera pose (from the environment), replicating the design in [Cheng et al., 2018]. | Summary: The paper introduces an RL setting where an agent take actions to simultaneously perform a task and control its vision. The authors design an algorithm for learning two policies for task performing and vision controlling. They test the algorithm on two domains: Atari and DMC. Results show that the proposed algorithm works better than baselines that do not intentionally control the vision. The authors conduct a comprehensive study to analyze the behavior of the algorithm.
Strengths: * The proposed setting is interesting. While it is not clear in the paper, I can envision potential significant impact.
* The paper is very well-written. The methods are clearly presented and the results are thoroughly analyzed.
* The experiment analyses are thoughtful and comprehensive. The behavior of the algorithm is dissected extensively, bringing valuable insights to the readers.
Weaknesses: My concerns are about the high-level framing of the paper and the lack of comparison with highly relevant baselines:
* The motivation of the setting is still unconvincing in terms of practicability. Scientifically, it is interesting to replicate human abilities to intentionally adapt their field of vision, but it is unclear why acquiring the same abilities would be pragmatically beneficial for an AI agent, compared with (i) observing all information or (ii) end-to-end-learned attentions.
* The paper seems to be missing a notion of "cost" (e.g., time budget, cognitive effort, amount of information), which makes it easier to motivate and compare different methods. For example, in figure 5(b), the method outperforms observing the full state, but it is unclear whether the gain is due to observing more pixels in general or having the ability to control the vision. A more fair comparison would fix a budget of pixels (or some other type of cost) for all methods.
* The authors should include baselines with end-to-end-learned attention. Without any cost of gathering new information, one can design a baseline that scans the full state, and applies attention on top of that. With this baseline, I am currently not convinced why learning attention through RL would be more effective than through end-to-end optimization.
* I suggest changing the framework to "Active*Vision*-RL", which fits better to the scope of the paper.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: I do not have any questions.
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations: There is no limitations section. I suggest discussing the limitations of the simulated environments in modeling real-world scenarios, and the drawbacks of RL in the face of rich information and action spaces.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for recognizing our contributions. We will address the concerns below.
**1\. The necessity of active perception**
We thank the reviewer for the constructive comments. To highlight the necessity of active perception and the effectiveness of our approach:
- We ran new experiments on Robosuite [Zhu et al., 2020], where active perception is naturally beneficial for manipulation. Our results show that active agents outperform hand-coded views most of the time. Please refer to our general response for the details of the experiments.
- We also compared our proposed approach with baselines that (i) observe all information (using an object detector) or (ii) use end-to-end-learned attention. SUGARL outperforms both. Please also refer to Tables 1 and 3 in our rebuttal PDF for detailed results. For the attention-based baseline, we note there are several possible approaches for RL, e.g., [James et al., 2022, Tang et al., 2020, Wu et al., 2021]. We choose [Tang et al., 2020] because it is differentiable and adds only a small computational overhead, allowing a fair comparison.
**2\. Pixel budget**
Please see the table below for a comparison of the number of observed pixels versus performance on Atari. In Atari and DMC, the number of pixels observed by the full-observation model is 84x84=7056. For the foveal-only settings, SUGARL uses **at most** 3x20x20=1200 (17.0%), 3x30x30=2700 (38.3%), or 3x50x50=7500 (106%) pixels, where 3 is the introduced PVM length and 20, 30, 50 are the foveal resolutions. We note that old and new observations overlap (because of stitching), so the actual numbers are smaller than these, especially in the 50x50 case. SUGARL models generally have lower pixel budgets than the full-observation model. In Robosuite, all methods have the same pixel budget. We will add these comparisons to the paper.
| Visual Settings | # Observed Pixels | Percentage of a Full Obs. | Performance on Atari |
|---------------------------------|---------------------|:--------------------------:|:----------------------:|
| 20x20 foveal only | <20x20x3=1200 | <17.0% | 0.475 |
| 30x30 foveal only | <30x30x3=2700 | <38.3% | 0.810 |
| 50x50 foveal only | <50x50x3=7500 | <106% | 0.805 |
| 20x20 foveal + 20x20 peripheral | <20x20x3+20x20=1600 | <22.7% | 1.010 |
| 30x30 foveal + 20x20 peripheral | <30x30x3+20x20=3100 | <43.9% | 1.116 |
| 50x50 foveal + 20x20 peripheral | <50x50x3+20x20=7900 | <112% | 1.289 |
| Full | 84x84=7056 | 100% | 1.000 |
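For concreteness, the upper-bound arithmetic in the table above can be reproduced with a short script (the helper name `budget` is ours for illustration; as noted above, stitching overlaps make the true pixel counts smaller than these bounds):

```python
# Upper-bound pixel budgets for the visual settings in the table.
# A PVM length of 3 means up to 3 foveal patches are retained at once;
# overlaps from stitching make the true count smaller than this bound.
FULL = 84 * 84  # 7056 pixels in a full observation

def budget(foveal, pvm_len=3, peripheral=0):
    """Maximum observed pixels: pvm_len foveal patches plus one peripheral view."""
    return foveal * foveal * pvm_len + peripheral * peripheral

for fov, per in [(20, 0), (30, 0), (50, 0), (20, 20), (30, 20), (50, 20)]:
    b = budget(fov, peripheral=per)
    print(f"{fov}x{fov} foveal + {per}x{per} peripheral: <{b} px (<{b / FULL:.1%})")
```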
**3\. Term ActiveVision-RL**
Thanks for the suggestion, we will follow it.
**References**
- James et al., Q-attention: Enabling Efficient Learning for Vision-based Robotic Manipulation, RA-L 2022
- Tang et al., Neuroevolution of Self-Interpretable Agents, GECCO 2020
- Wu et al., Self-Supervised Attention-Aware Reinforcement Learning, AAAI 2021
- Zhu et al., robosuite: A modular simulation framework and benchmark for robot learning, 2020
---
Rebuttal Comment 1.1:
Title: Thanks for your response
Comment: I appreciate the additional experiments. I decide to raise my score to 7. I hope the authors will incorporate the rebuttal materials into the next version. | Summary: The authors develop an RL framework for problems where the agent must control its perception in order to solve the task. Their framework, SUGARL, decouples the task into a sensory and motory policy, and propose a heuristic reward for training the sensory policy stably. They evaluate in Active Vision versions of Atari and DMC.
---
(8/14/23) The rebuttal has addressed my major concern, which is showing SUGARL results in more realistic robotics tasks in 3d scenes. I also appreciate the additional baselines the authors added in.
Updating my score from 3 to 6.
I would recommend the authors to focus their time now on making the paper easier to read, as well as performing experiments on two more realistic robotic settings - a setting where moving the camera has a cost, and a setting where there are physical occluders.
Strengths: - An RL framework that sensibly handles active perception by decoupling the overall task into sensory and motor tasks.
- The authors propose a heuristic dense reward for the sensory policy that seems to work well in their tasks, although I have some questions about its generality. Nevertheless, I appreciate its simplicity and effectiveness.
- Interesting and diverse experiments, although experiments are in rather artificial active vision settings (see concerns).
- Sometimes, Active-RL can even outperform Full Observation baseline with joint training of the active vision policy. This is impressive, since training both active vision and motor policies from scratch can easily destabilize the RL agent.
- I like the qualitative analysis of the sensory policy behavior.
Weaknesses: ## Major
**Experimental section does not evaluate active perception in natural settings.**
- There is an obvious omission of tasks in natural domains where active perception is necessary. All tasks are currently done in modified versions of Atari or DMC where the agent can instantly change the view at each timestep. In more realistic settings, such as robotics, changing the sensors has costs - energy, delays, etc. These constraints are not mentioned or studied at all.
- NeurIPS is a more theoretical conference, but this paper focuses on solving Active RL problems, which is mainly a robotics problem. So while the Atari/DMC benchmark adds some scientific value, the experimental section needs a more realistic Active RL benchmark to show practical value. The authors should show results on a robotics task, like manipulation or navigation, as done in related work [1,2]. I am also open to discussing this point, but the authors would need to convince me that the Atari/DMC tasks are somehow reflective of actual Active-RL applications.
This is the most pressing issue for me, and I believe if addressed, will make this paper very compelling. If SUGARL can show good trends (outperform or competitive with full view baseline) in realistic applications, it will be a good method since joint training the sensory and motor policies together is nontrivial.
## Moderate
**Experiment section is messy.** It has the feeling of being cobbled together rather than presenting a unified analysis. There are many small experiments and their figures are all over the place. This makes it hard for the reader to understand key takeaways. For example, section 5.1 seems to have 5 "mini" experiments, each asking a different question. Some experiments seem more tangential (e.g. training for more steps) and can be moved to appendix. While it's hard for me to tell exactly what needs to be fixed here, overall this section needs to be rewritten so the reader can easily get key takeaways. Can the authors attempt to make this section easier to read?
**PVM is an implementation trick for 2D environments rather than a contribution.** Related to my major concern on using only artificial 2D environments - the authors propose Persistence of Vision memory, which stitches together multiple recent images into one. This is more of an implementation trick for 2D environments. Can we give baselines access to PVM, or remove PVM from SUGARL for comparison? I see there is already an ablation for PVM, but I would like to see this across all tasks. Alternatively, the authors can evaluate in a 3D environment (like robotics, navigation, etc.) and show a 3D version of PVM. Either way would make it more general.
**SUGARL reward on non agent-centric tasks.** The SUGARL reward rewards the sensory policy for choosing views with low action prediction error. However, will this still work in tasks where relevant parts of the image are unrelated to the action? For example, Pong and Pinball have moving small balls that are important to look at, but the balls give little information about the action.
**Reproducibility.** No codebase or promise of code release of the Active-Gym benchmark or SUGARL algorithm.
1. Cheng, Ricson, Arpit Agarwal, and Katerina Fragkiadaki. "Reinforcement learning of active vision for manipulating objects under occlusions." _Conference on Robot Learning_. PMLR, 2018.
2. Wang, Hanqing, et al. "Active visual information gathering for vision-language navigation." Computer Vision–ECCV 2020: 16th European Conference, Glasgow, UK, August 23–28, 2020, Proceedings, Part XXII 16. Springer International Publishing, 2020.
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: ## Questions from the limitations:
Can the authors provide results on a more realistic benchmark where active perception is required, like manipulation or navigation?
Can the authors provide a more "fair" comparison with baselines by either removing or adding PVM?
Does the SUGARL reward work in environments where action prediction is not helpful for solving the task?
## Clarification questions:
It says that both policies are trained with a shared reward function. Does this mean that the motor policy can also optimize the sugarl reward? If so, couldn't this lead to a degenerate case where the motor policy only moves in very predictable ways?
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 2 fair
Presentation: 3 good
Contribution: 3 good
Limitations: The authors say one sentence for limitations - that Active-RL learns more slowly than full observations. This is fairly obvious, and I would like to see more discussion on other limitations. I can suggest some.
This framework makes many assumptions about the active perception itself. First, it assumes a clean separation between motor and sensory actions - they do not affect each other. However, in many tasks, *interactive perception* is required - a robot may choose to move its arm away from the camera or open a box to remove occlusion. These actions can count as both motor and sensory actions. Another assumption is that the active perception actions happen instantly alongside motor actions, and have no cost. This is not true in the real world. Can SUGARL handle these assumptions, or what extensions would be necessary to address them?
Next, what are the compute and resource constraints of training an Active-RL policy versus a normal policy? Can the authors provide more details about GPU usage, runtime of experiments, etc.?
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for recognizing the effectiveness of our approach and providing thoughtful comments. Following the suggestions, we conducted additional experiments on more realistic robot environments.
**1\. Active perception settings**
Following the suggestion, we tested our algorithm on a more realistic robot control (simulated) environment, Robosuite [Zhu et al., 2020]. In this environment, the camera motion is controlled by 5-DoF control: relative (x, y, z, yaw, pitch), and is constrained by maximum linear and angular velocities. Under this constrained active perception setting, we confirm that SUGARL works the best.
In the Atari results we provided, we also test SUGARL with relative control (SUGARL(rel), Table 1(a) in the paper). This control only allows the view to move to nearby locations. From the experimental results, we observe that it can sometimes outperform absolute control.
**2\. Robotics task**
We followed the suggestion and conducted new experiments on Robosuite. Results are available in the general response and Table 1 in the rebuttal PDF. In short, SUGARL achieves the best performance on 3 of 5 challenging manipulation tasks (all of which are harder tasks), and outperforms hand-coded views most of the time on all tasks.
**3\. Organization of experiment section**
Thanks for the comment; we will improve this section accordingly, for example by adding a list of the questions we aim to answer at the beginning of the section.
**4\. Contribution of PVM**
Please see our general response (2. Designs of PVMs). We conducted extra experiments on Robosuite, where we built several new instantiations of PVM for this 3D case. The results are in Table 1 of the rebuttal PDF. PVM is about finding an appropriate function to combine partial observations spatially and temporally, and we provide several versions of it in these new experiments. We find the version designed for the 3D environment to be effective at this spatial and temporal combination.
Following the suggestion, we tested SUGARL w/o PVM on Robosuite and DMC (Tables 1, 2 in the rebuttal PDF). We confirm that PVM is an important component.
**5\. Non-agent centric tasks/sensory policy behavior**
The sensory policy is learned and will focus on the necessary part. It is not forced to pay attention to specific parts like surroundings. In the meantime, the presence of environmental reward encourages both policies to be task-oriented. So the sensory action won't overly satisfy the intrinsic reward.
One example is the visualization of Assault (middle of Row 2) on the supplementary website linked in the caption of Figure 6. The sensory policy focuses on the enemies (top) rather than the agent (bottom). In the visualization of Pong (Row 7), the sensory policy observes the ball many times.
**6\. Effects of shared reward function**
The final reward we use is a combination of the environmental reward $r^{env}$ and our intrinsic reward $r^{sugarl}$. With the presence of the environmental reward, the motor policy won't overly satisfy the intrinsic reward. The reward balance is key and can be determined in a principled way following previous research [Hasselt et al., 2016, Henderson et al., 2018, Choi et al., 2019]. We verify its effectiveness and robustness across different visual settings in Atari and DMC. We did not observe a highly predictable motor policy in our current study.
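A minimal sketch of this combination follows; the function names and the exact form of the intrinsic term (here, the log-likelihood of the motor action actually taken under a hypothetical inverse-dynamics model, matching the action-prediction-error idea described in the reviews) are our illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def sugarl_intrinsic(action_logits, true_action):
    # Hypothetical intrinsic term: log-likelihood of the motor action
    # actually taken, under an inverse-dynamics model predicting the
    # action from two consecutive observations. Views that make the
    # taken action easy to infer score higher (closer to 0).
    logits = np.asarray(action_logits, dtype=float)
    log_probs = logits - np.log(np.sum(np.exp(logits)))  # log-softmax
    return float(log_probs[true_action])

def combined_reward(r_env, r_sugarl, lam):
    # Both policy heads are trained on the same scalar reward; the
    # balance lam keeps the intrinsic term on a scale comparable to r_env.
    return r_env + lam * r_sugarl
```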
**7\. Discussion on limitations**
Following the suggestion, we provide a limitations section, which we will incorporate into the revised paper.
In this work, we assume that completely independent sensory and motor actions are present in an embodied agent. But in a real-world case, the movement of the sensors may depend on the motor actions; for example, a camera fixed to the end-effector of a robot manipulator, or mounted on a mobile robot. To address the potential dependence and conflicts between the two policies in this case, extensions like voting or weighting across the two actions to decide the final action may be required. The proposed algorithm also assumes a chance to adjust viewpoints at every step. This could be challenging for applications where the operational or latency costs of adjusting the sensors are high, such as remote control. To resolve this, additional penalties on sensory actions and larger memorization capacity are potentially needed. Last, the intrinsic reward currently only considers the accuracy of agent-centric prediction. Other incentives, like gathering novel information or prediction accuracy over other objects in the environment, can be further explored.
**8\. Reproducibility and resources**
We will open-source our approach and the benchmark, including the robotics environment.
The resources required are listed below, measured using an NVIDIA A5000 GPU. The speed is mostly bottlenecked by the underlying simulator (if only one thread is used), not by our algorithm.
| Environment | Algorithm | Steps | RAM | GPU VRAM | Wall-clock Time/Task |
|-------------|--------------|-------|-----|----------|----------------------|
| Atari | DQN | 1M | 18G | 1.3G | 1.2 hrs |
| Atari | SUGARL-DQN | 1M | 18G | 1.7G | 1.6 hrs |
| DMC | DrQv2 | 100K | 18G | 2.5G | 1.4 hrs |
| DMC | SUGARL-DrQv2 | 100K | 18G | 2.8G | 1.7 hrs |
| Robosuite | DrQv2 | 100K | 54G | 3.9G | 2.5 hrs |
| Robosuite | SUGARL-DrQv2 | 100K | 54G | 4.2G | 3 hrs |
Thank you for reading our responses.
**References**
- Henderson et al., Deep reinforcement learning that matters, AAAI 2018
- Hasselt et al., Learning values across many orders of magnitude, NeurIPS 2016
- Choi et al., Intrinsic motivation driven intuitive physics learning using deep reinforcement learning with intrinsic reward normalization, 2019
- Zhu et al., robosuite: A modular simulation framework and benchmark for robot learning, 2020
---
Rebuttal Comment 1.1:
Title: Good start, hope to see some more progress.
Comment: Thank you for the initial rebuttal, I think it is a good start for addressing my concerns. Updated score from 3 to 4 to reflect the improvement so far.
## More realistic robotics experiments
I appreciate the new robotics experiments. However, they are still not complete - analysis of the experiments is missing. I would like to see some more analysis of them, like videos, comparisons of behavior, and why does Active-RL work or not work in certain tasks compared to the baselines.
I don't get the "Adv over hand coded views" column, can you give more details about this metric? Is it the number of evaluation episodes where one method gets higher returns than another?
## PVM concern is addressed.
The authors show how to use PVM for 3d images, by proposing multiple ways of fusing pixel information over time. This is quite interesting itself, and addresses my concern about PVM.
## Bonus Experiment on Cost of Perception
Can you design a robotics task where perception has a cost? This is a more realistic setup. For example, you can put some energy cost for moving the camera too much. It would be interesting to see how sensory policies learn to efficiently perceive the scene as well.
---
Reply to Comment 1.1.1:
Comment: We thank the reviewer for the positive feedback. We have the visualization video showing the learned policies at https://drive.google.com/file/d/1Ay-ZdJMF3ekjNe1sa7Gt9fqMgtxb-itb/view?usp=sharing
**Explanations on robotics experiments**
> Policy behavior
We observe that the learned sensory policy exhibits multiple active-vision skills, like tracking the end-effector, moving to focus on task-oriented objects, or moving to take in the whole scene. Details are available in the accompanying video.
> why does Active-RL work or not work in certain tasks compared to the baselines
Compared to learning from hand-coded views, we find that SUGARL works better for harder tasks. The movable camera view can potentially gather more information that helps the motor policy. In the easier tasks, including Lift and Stack, the moving camera may cause unnecessary difficulty in observing the target and the status of the gripper.
> Adv over hand coded views
Sorry for the confusion. For each method other than hand-coded views, we compare it to each of three hand-coded views in each of the five tasks. We count one for having a higher IQM than a hand-coded view in one task. So the maximum is 15 if one method beats all hand-coded views in all tasks. This is to measure how an active-vision agent performs compared to standard RL under the hand-coded view. | Summary: This paper presents a method for what the authors call "active RL", a flavor of RL where the agent not only needs to pick a (motor) action, but also needs to select the (sensory) action where to look. The authors do so by training a DQN or SAC agent with separate heads for the motor and sensory policies respectively. The sensory actions are learned using a separate reward function, which is based on the prediction error when predicting the motor action from two consecutive observations. The authors evaluate on Atari and DMC equipped with limited observability.
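The decoupled architecture described in this summary (one agent with separate heads for the motor and sensory policies) can be sketched as follows; the shared-encoder layout, sizes, and names here are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

class TwoHeadQNet:
    """Toy two-head value network: a shared encoder with separate linear
    heads producing motor and sensory action values (a sketch of the
    decoupled-policy idea, not the paper's actual architecture)."""

    def __init__(self, obs_dim, n_motor, n_sensory, hidden=32):
        self.W1 = rng.normal(0, 0.1, (obs_dim, hidden))
        self.Wm = rng.normal(0, 0.1, (hidden, n_motor))
        self.Ws = rng.normal(0, 0.1, (hidden, n_sensory))

    def forward(self, obs):
        h = np.tanh(obs @ self.W1)       # shared features
        return h @ self.Wm, h @ self.Ws  # motor Q-values, sensory Q-values

net = TwoHeadQNet(obs_dim=8, n_motor=4, n_sensory=9)
q_motor, q_sensory = net.forward(np.ones(8))
motor_action = int(np.argmax(q_motor))      # e.g. joystick direction
sensory_action = int(np.argmax(q_sensory))  # e.g. one of 9 fixation points
```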
Strengths: - Active vision is an important skill that we humans use all the time, but is often overlooked in the RL literature. It's nice to see an "active RL" approach addressing this issue.
- The paper is well written, and presents nice quantitative and qualitative results on both DMC and Atari.
Weaknesses: - I found the very coarse discretization of sensory actions limiting.
- The method seems very dependent on the reward balance hyperparameter.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: - In the case where no PVM is used, do you still provide the complete frame size, but with the unseen pixels blacked out? Would it also work when providing just the foveal observation?
- The sugarl reward is focused on predicting the agent's motor action. However, in many games you should rather pay attention to the surroundings, since you "know" your action (i.e. especially when standing still for instance), but you'd want to know if there are new obstacles popping up at the edges. Have you also tried other incentives, for instance expected information gain, which supposedly also underpins human saccading, i.e.
https://www.frontiersin.org/articles/10.3389/fncom.2016.00056/full
https://www.frontiersin.org/articles/10.3389/fnbot.2022.840658/full
- It's interesting that SUGARL outperforms vanilla RL with full observations on some environments. Do you have any insights on why this might be the case? Is there anything common to the particular environments where the gap is largest?
- Would it help to provide the policy with the current sensory action, i.e. so it knows where it is currently looking? Although it probably won't make a difference in case the PVM / full observation with black pixels is used, as it can derive it from that. Still, it feels like a waste of compute to process all these black pixels.
- Could the sensory policy be generalized across environments?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 2 fair
Contribution: 2 fair
Limitations: The main limitation mentioned is the slower training. I'd say the limited set of fixed fixation points is also a limitation of the current implementation. It would be nice to scale this to an agent that e.g. has a continuous pan/tilt control for its camera.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for recognizing our technical contribution and our analysis. Please find our response below.
**1\. Sensory action space**
We follow the suggestion and test SUGARL with continuous camera control on Robosuite, a simulated robotics environment. The agent uses 5-DoF relative control (x, y, z, yaw, pitch) to move the camera. Our results are available in the general response and Table 1 in the rebuttal PDF. We find that SUGARL is also effective in the continuous sensory action setting for challenging robotics tasks.
**2\. Reward balance**
Similar to other research [Hasselt et al., Henderson et al., Choi et al.], the reward balance hyperparameter is necessary to make rewards at different scales work together. We follow that prior research to select the balance. The hyperparameter is computed as the max/average environmental return normalized by the length of the trajectory. In our experiments on Atari and DMC, we use only one hyperparameter per game/task for all visual settings, and SUGARL performs well consistently. This hyperparameter strategy also works for our new Robosuite experiments. These results verify the robustness of the balance selection.
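The selection rule described above can be sketched in a few lines, under the stated assumption that the balance is the max (or average) environmental return divided by the trajectory length (the function name and `use_max` switch are ours for illustration):

```python
def reward_balance(episode_returns, episode_length, use_max=True):
    # Normalize the typical environmental return by trajectory length so
    # the per-step intrinsic reward is weighted onto a comparable scale.
    if use_max:
        ref = max(episode_returns)
    else:
        ref = sum(episode_returns) / len(episode_returns)
    return ref / episode_length
```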
**3\. Pixel utilization / foveal observation without black padding**
We find that providing the foveal observation without padding (IQM=0.400) is not better than the approach with PVM (IQM=0.805). Providing the blacked-out pixels helps performance.
**4\. Sensory policy incentives and behaviors**
Thanks for these inspirational references. We will look into the connections between ours and the references, and will investigate new forms of incentives in the future.
The sensory policy is learned and will focus on the necessary parts; it is not forced to pay attention to specific regions like the surroundings. The presence of the environmental reward encourages both policies to be task-oriented, so the sensory action won't overly satisfy the intrinsic reward. Furthermore, thanks to PVM, the agent can access observations from several steps before. This allows the agent to use information gathered from different locations, and thus supports potential "saccading" skills.
**5\. Insights on outperforming some full observation baselines**
Our insight is that the partially observable area excludes noisy information that may not be helpful for decision making.
For example, the “scores” at the top or the bottom of the game screen can be noisy: their pixels change, but those changes do not reflect changes in the actual “state”, such as object locations.
BattleZone is one of the environments with the largest performance improvement over the full observation model (>3x). As shown in Figure 6, the “fixation” behavior keeps the visual observation fixed on the center of the screen, which is the only direction the agent can fire a bullet. This “simplified” observation allows easier policy learning. More visualization is available via the caption of Figure 6.
**6\. Providing sensory action location as input**
Thanks for the thoughtful question! PVM indeed implies the sensory action location, as it combines observations precisely in space. We tried providing this information to the policy explicitly, but it did not further improve performance.
**7\. Sensory policy transfer**
This work is under the single-task setting, so it is hard to measure the transferability of the learned agent. In visual RL, transferring the visual encoder itself can be challenging even before studying the transfer of the policy [Hansen et al.], as the encoder has more parameters. To make it work, multi-task (pre-)training might be needed, which is a completely different setting from this paper. The transfer of learned policies is a valuable direction, and we leave it for future work. Thanks for the valuable question!
**8\. Continuous camera control**
We follow the suggestion and conduct new experiments on Robosuite, with continuous control of the camera. According to the results in Table 1 in the general rebuttal PDF, SUGARL works the best. Please also refer to our general response for more details.
Thank you for reading our responses.
**References**:
- Henderson et al., Deep reinforcement learning that matters, AAAI 2018
- Hasselt et al., Learning values across many orders of magnitude, NeurIPS 2016
- Choi et al., Intrinsic motivation driven intuitive physics learning using deep reinforcement learning with intrinsic reward normalization, 2019
- Hansen et al., Stabilizing Deep Q-Learning with ConvNets and Vision Transformers under Data Augmentation, NeurIPS 2021
---
Rebuttal Comment 1.1:
Comment: I thank the authors for their detailed responses to my questions. I especially appreciate the new Robosuite experiment, and increased my score accordingly. | Rebuttal 1:
Rebuttal: **General response:**
We thank all reviewers for inspiring comments and questions. In this general response, we address the common concerns of more experiments on (1) robot manipulator tasks, (2) more baselines, and (3) more designs of proposed Persistence-of-Vision Memory (PVM).
1. **New experiments on robot manipulator tasks**
In addition to Atari and DMC, following the suggestions from the reviewers, we test SUGARL on Robosuite [Zhu et al., 2020], a simulated robotics environment.
**Tasks**: We selected five of the available tasks, namely block lifting (Lift), block stacking (Stack), nut assembling (NutAssembleSquare), door opening (Door), and wiping the table (Wipe). The first two are easier than the latter three. Example observations are available in Figure 1 in the rebuttal PDF.
**Camera / sensory action setting**:
The Active RL agent is allowed to control the camera. We use 5-DoF **continuous** control: relative (x, y, z, yaw, pitch). The maximum linear and angular velocities are constrained to 0.05/step and 5 degrees/step, respectively. For reference, the dimension of the table is 0.8x0.8.
**Results**:
We compare against baselines including RL with object detection (a replication of [Cheng et al., 2018]), learned attention [Tang et al., 2020], and standard RL with hand-coded views. Results are in Table 1 in the new rebuttal PDF. We confirm that SUGARL outperforms the SOTA baselines in all cases, and also outperforms the hand-coded views most of the time. Specifically, for the harder tasks, including Wipe, Door, and NutAssemblySquare, SUGARL achieves the best scores.
2. **More SOTA baselines**
Following the reviewers' suggestions, we implement two SOTA baselines.
- **DrQ w/ Object Detection**: Since [Cheng et al., 2018] requires goal-conditioning, unlike our setting, we extend [Cheng et al., 2018] with a stronger object detector and RL algorithm. We replicate [Cheng et al., 2018] as faithfully as possible using a pre-trained object detector, DETR [Carion et al., 2020], which provides object bounding boxes in addition to the visual observation. Compared to the original version of [Cheng et al., 2018], the reimplemented method has (a) a stronger detector, DETR, and (b) DrQv2, an improved version of DDPG. We remove the goal conditioning of [Cheng et al., 2018].
- **DrQ w/ End-to-End Attention**: We use [Tang et al., 2020] to implement a learnable attention mechanism. This approach divides the input image into patches and selects the K patches with the highest attention scores as the policy input. We use it along with DrQv2 for a fair comparison.
From the experimental results in Table 1 in the rebuttal PDF, we find that SUGARL outperforms them by a large margin; they perform similarly to or only slightly better than the Single Policy baseline. In Table 3 in the rebuttal PDF, we also find that SUGARL outperforms the attention approach on Atari.
3. **Designs of PVMs**
The Persistence-of-Vision Memory (PVM) is proposed as a general framework, and what we present in the paper is one implementation of it (i.e., 2D-image stitching). Following the reviewers' suggestions, we provide further instantiations. We additionally implemented it using
- **LSTM**: Each image is first encoded by CNN and fed into LSTM.
- **3D Transformation + Stacking**: We use camera parameters to align pixels from different images to the current camera frame. Then the transformed images are simply stacked on the channel axis. CNN encodes the stacked images.
- **3D Transformation + LSTM**: Similar 3D transformation as above, but using an LSTM to encode the images after going through CNN.
We compare these new instantiations to a simple stacking PVM in Robosuite; results are available in the first 4 rows of Table 1 in the rebuttal PDF. We find that 3D Transformation + LSTM works the best, because it tackles spatial alignment and temporal merging together. LSTM also works well in general.
We also test LSTM PVM on Atari to compare it with our originally proposed Stitching PVM. The results are available in Table 3 of the rebuttal PDF. We find that LSTM PVM does not outperform Stitching PVM because Stitching can combine observations exactly according to their spatial locations.
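A toy sketch of the 2D stitching instantiation may help make the comparison concrete; the canvas size, the zero initialization for unseen pixels, and the newest-pixel-wins overwrite rule are illustrative assumptions, not the paper's exact implementation:

```python
import numpy as np

class StitchingPVM:
    """Toy 2D stitching memory: paste each foveal patch onto a persistent
    canvas at its fixation location, so the agent sees an accumulated view.
    Newer pixels overwrite older ones; unseen pixels stay black (zero)."""

    def __init__(self, h=84, w=84):
        self.canvas = np.zeros((h, w))

    def update(self, patch, top, left):
        ph, pw = patch.shape
        self.canvas[top:top + ph, left:left + pw] = patch
        return self.canvas

pvm = StitchingPVM()
obs1 = pvm.update(np.ones((20, 20)), 0, 0)        # first fixation
obs2 = pvm.update(2 * np.ones((20, 20)), 10, 10)  # overlapping second fixation
```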
**Details of robosuite experiments**
- **Hand-coded view baselines:** We select three hand-coded views that come with Robosuite -- **Front View**, **Agent View**, and **Eye-in-hand View**. These are used for standard RL. Front View has the camera facing the robot. Agent View places the camera closer to the robot than Front View, more focused on the working area (e.g., the table top or the door). The Eye-in-hand View comes from a camera attached to the end effector of the robot. Front View and Agent View are static, while the Eye-in-hand camera moves together with the end effector. Examples are available in Figure 1 in the rebuttal PDF.
- **Training:** Each task is trained for 100k steps. The size of the replay buffer is 100k. Each task is trained for 3 seeds, with 10 evaluation runs at the end of the training.
- **Metrics:** We report the absolute IQM value from a total of 30 evaluations.
Thank you for reading our responses.
**References**
- Carion et al., End-to-End Object Detection with Transformers, ECCV 2020
- Cheng et al., Reinforcement Learning of Active Vision for Manipulating Objects under Occlusions, CoRL 2018
- Tang et al., Neuroevolution of Self-Interpretable Agents, GECCO 2020
- Zhu et al., robosuite: A modular simulation framework and benchmark for robot learning, 2020
Pdf: /pdf/3aa67f329e25c0c99f65d76ab3a049b4cf0fae34.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: This work focuses on reinforcement learning with a controllable perceptive field where the agent not only learns a motor policy but learns a sensor policy at the same time. The sensor policy controls the information to be obtained. This work proposes to coordinate two policies by introducing an intrinsic reward encouraging the sensor policy to provide observation correlates to the decision making. The proposed method is evaluated on modified DMC and atari benchmarks to show the effectiveness of the proposed method.
Strengths: 1. The proposed method is intuitive and easy to implement, and enough technical details are included for future reproduction.
2. This work performed extensive ablation study over different components of the proposed method and showed effectiveness of different components.
3. This work provides clear visualization of learned sensor policy for better understanding of the work, and discusses the behavior of the sensor policy under different observation resolutions.
4. The overall writing of the work is good, the paper is easy to follow and easy to understand.
Weaknesses: 1. The problem itself is not novel; previous work [1] has studied similar settings with more challenging tasks, and separately modeling the sensor policy and the motor policy should not be considered a contribution either. With only the proposed new intrinsic reward functions, the novelty of the work is relatively limited.
2. The proposed Persistence-of-Vision Memory module is specifically designed for the given task and can hardly be applied to more realistic settings such as moving cameras with different poses.
3. According to the detailed experiment results in the supplementary materials, the results are very noisy and performance varies considerably across environments.
4. Some baselines, such as DrQ/DQN with separate heads for the motor policy and sensory policy, should be compared to validate the method. These baselines should not be considered part of the ablation study.
[1] Cheng et al., Reinforcement Learning of Active Vision for Manipulating Objects under Occlusions, CoRL 2018
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: 1. How does DrQ or DQN with separate heads for the sensor policy and motor policy perform on the different benchmarks?
2. How do other forms of memory work in this case, e.g., an LSTM, which is much more generalizable to other applications than the proposed PVM module?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 3 good
Contribution: 2 fair
Limitations: Potential negative societal impact is not applicable to this work. For POMDPs, providing some kind of memory mechanism for the agent to learn a better policy is quite standard practice in the community; baselines with an LSTM or other memory mechanisms should be compared.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for the comments and questions. Please see below for our response that addresses the concerns.
**1\. Novelty of this research**
We would like to highlight our contributions as follows:
- The decomposition of policies governing sensory and motor actions. This is a different approach from conventional single-policy approaches, including [Cheng et al., 2018]. We experimentally show that ours works much better (Table 1 in the rebuttal PDF) on difficult Robosuite manipulator tasks.
- A new approach that enables learning of such decomposed policies, based on joint learning with an intrinsic reward. We empirically show that this joint-learning approach is easy to implement via simple extensions of many existing RL algorithms, and that it works well.
- We introduce the concept of PVM and show different instantiations of it, confirming their effectiveness through experiments in 2D and 3D environments.
- We provide a comprehensive analysis of sensory-policy behavior that aids understanding and is beneficial for further sensory-policy / joint-learning algorithm design.
And another key difference is that [Cheng et al., 2018] was originally designed for goal-conditioned RL, which has access to the privileged information.
**2\. More instantiations of Persistence-of-Vision Memory (PVM)**
The Persistence-of-Vision Memory is proposed to be a general framework that combines multiple recent partial observations into one using a function $f()$. What we present in the paper is one implementation of it (i.e., 2D-image stitching), highlighting its necessity. Following the suggestion, we further provide more instantiations of $f()$. We additionally implemented:
- **LSTM-based PVM**: Each image is first encoded by a CNN and then fed into an LSTM
- **3D Transformation + Stacking**: We use camera parameters to align pixels from different images to the current camera frame. Then the transformed images are simply stacked on the channel axis.
- **3D Transformation + LSTM**: Similar 3D transformation as above, but using an LSTM to encode the image sequence.
The results in Table 1 of the rebuttal PDF show that these variants can handle 3D environments. In particular, the PVM that combines a temporal LSTM with 3D alignment is the most effective: it handles the camera motion well and improves policy learning.
The motivation behind the PVM design is to combine visual observations from multiple **spatio-temporal** locations. We study its influence on the challenging active vision + RL problem. While recurrent models like LSTMs can provide a degree of memorization, explicit spatio-temporal encoding (e.g., 2D stitching or 3D transformation) further improves the memory.
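To make the 2D-stitching instantiation concrete, here is a minimal NumPy sketch of the idea (illustrative only -- the function and variable names are ours, not the actual implementation):

```python
import numpy as np

def stitch_pvm(canvas, obs, top, left):
    """Paste one partial observation into the persistent canvas.
    Newer pixels overwrite older ones, so the canvas accumulates a
    stitched view of everything observed so far."""
    h, w = obs.shape
    canvas[top:top + h, left:left + w] = obs
    return canvas

# A 6x6 scene observed through a 2x2 movable window at three positions.
scene = np.arange(36).reshape(6, 6)
canvas = np.zeros_like(scene)
for top, left in [(0, 0), (0, 2), (2, 0)]:
    glimpse = scene[top:top + 2, left:left + 2]
    stitch_pvm(canvas, glimpse, top, left)

# The stitched memory now reveals more of the scene than any single
# glimpse, while unvisited regions remain empty.
```

The LSTM-based and 3D-transformation variants above replace this simple overwrite rule with a learned recurrent update or a camera-parameter-based reprojection, respectively.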
**3\. Performance and learning curves**
The performance varies across environments due to different environment properties, which is very common for RL algorithms. For example, in Appendix E of DreamerV2 [Hafner et al., 2021] and Figure 3 of SimPLe [Kaiser et al., 2020], the performance also varies considerably.
For the curves, we show the raw episodic return during training without smoothing. They are noisy due to random explorations in the learning process. We will improve the visualization to make it clear.
**4\. More comparison using separate heads**
Following this suggestion, we provide more results for DrQ with separate heads (i.e., **SUGARL-DrQ w/o Joint Learning**) in Robosuite (Table 1 in the rebuttal PDF) and in DMC (Table 2 in the rebuttal PDF). Both experiments show that separate heads do not produce better results than full SUGARL.
Thank you for reading our responses.
**References**
- Cheng et al., Reinforcement Learning of Active Vision for Manipulating Objects under Occlusions, CoRL 2018
- Hafner et al., Mastering Atari with Discrete World Models, ICLR 2021
- Kaiser et al., Model Based Reinforcement Learning for Atari, ICLR 2020
---
Rebuttal Comment 1.1:
Comment: After reading the response, I'd like to raise my score to borderline accept. The new instantiations of PVM look reasonable, and the authors provide some new results in the response PDF. It would be great if the authors could include all the new results in the revised version
Orthogonal Gradient Boosting for Interpretable Additive Rule Ensembles | Reject | Summary: In this paper, the authors propose a gradient-boosting algorithmic framework for rule ensemble learning, emphasizing the interpretability of the produced rule sets. Various gradient-boosting algorithms are reviewed in the rule-learning context, and the authors argue that a specific boosting algorithm, called fully corrective orthogonal gradient boosting (FCOGB), is particularly suited for rule boosting. The intuition is that existing additive rule-boosting procedures operate in a strictly greedy fashion - the weight of each added rule is fixed in later iterations. In contrast, FCOGB allows the weights of preceding rules to be adjusted in each later iteration, which may help to reduce the number of rules required to reach a certain accuracy, and thus the cognitive complexity of the final rule set. Based on FCOGB, the authors derive the stepwise boosting objective function for single-rule search, which is similar to existing gradient-boosting objectives but with a different regularization term. The overall algorithm resembles the conjugate gradient method - in each iteration, the rule aligning best with the gradient in the orthogonal complement of the subspace spanned by previous rules is added. The authors demonstrate the effectiveness of FCOGB through experimental comparison with existing rule-boosting algorithms on classification, regression, and Poisson regression tasks.
Strengths: - Applying FCOGB to rule learning, to the best of my knowledge, is a novel idea, and the authors provide a comprehensible justification for this choice. Figure 2 is helpful in understanding the difference between FCOGB and existing rule-boosting algorithms.
- How to search the optimal rule in each iteration is especially considered, which is a key step in rule boosting. The authors propose a strategy that exploits the nice structure in the boosting objective function to speed up the bound calculation in branch-and-bound search of optimal rules.
- The proposed algorithm is evaluated on a wide range of datasets and tasks. The authors provide a detailed analysis of the results. Figure 1 clearly shows that FCOGB can achieve a better accuracy-risk trade-off than existing rule-boosting algorithms.
Weaknesses: - The presentation of the paper can be improved. For example:
+ In the "Rule Boosting" section, the "Gradient boosting" subsection mixes the description of general gradient boosting and the more specific rule boosting. This makes it hard to understand these objectives for readers who are not familiar with the rule-boosting literature. For example, obj_gb(q) = |g^T q|/||q|| is nonstandard in the general gradient boosting literature. It would be better to separate the general gradient boosting and rule boosting parts.
+ The "Single rule optimization" subsection assumes too much prior knowledge about the rule learning literature. I would suggest the authors merge this subsection with the "4.3 Efficient Implementation" subsection to make the paper more fluent and self-contained.
+ The authors should provide more details about the proposed algorithm, especially the BnB/beam search of a single rule.
- Lack of comparison with rule induction algorithms based on column generation, e.g., [30] and [b]. In the column generation approach, the weights of all added rules are also adjusted in each iteration when solving the restricted master problem, which is similar to FCOGB. I am interested in how FCOGB compares with this approach experimentally.
- The presentation of the prefix optimization problem is misleading. The authors claim that "This function can be efficiently computed for many objective functions by pre-sorting the data in time O(n log n)" in Section 3, but is this true for the objective function obj_{ogb}(q)? The authors should clarify this point. I cannot immediately see how the optimal solution to (2) under this objective function is contained in the prefix of the data sorted by some (what?) criterion. If this is true, the authors should provide a proof or a reference to support this claim.
- There is a mistake in Lines 233-234.
- Missing references:
+ [a] Jonathan Eckstein, Noam Goldberg. An Improved Branch-and-Bound Method for Maximum Monomial Agreement. INFORMS Journal on Computing, 2012.
+ [b] Jonathan Eckstein, Ai Kagawa, Noam Goldberg. REPR: Rule-Enhanced Penalized Regression. INFORMS Journal on Optimization, 2019.
+ [c] Fan Yang, et al. Learning Interpretable Decision Rule Sets: A Submodular Optimization Approach. NeurIPS 2021.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: - The test accuracy of FCOGB is not generally better than that of existing rule-boosting algorithms. Can this be explored in more detail?
- How the averaged, normalised training risks in Table 1 are computed? An exact formula would be helpful.
- It is well-known in recent rule learning literature ([6] and [c]) that the tic-tac-toe dataset can be perfectly learned by eight 3-CNF rules. Can FCOGB learn the perfect rule set on this dataset?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: The limitations of this work are not explicitly discussed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank you for your thoughtful comments and generally positive evaluation. Below we first address your questions and then provide some clarifications regarding your other concerns.
**Test performance**
The results in the original submission unfortunately strongly undersold the proposed method in terms of test risk because they were based on the methods’ performance without regularization. This affected the proposed method the most, as, due to its improved objective, it fits the training data tightest and is therefore most prone to overfitting if unregularised. *Please have a look at the global rebuttal and the included pdf to see that, when appropriate L2-regularization is performed, the proposed method is the best in terms of test risk for 23 of the 34 datasets, and its advantage over the other boosting variants is statistically significant for both training and test risk (with appropriate Bonferroni correction)*. We hope that these additional results positively affect your overall evaluation of our work.
NB The initial experiment was designed without regularization for the sake of a streamlined process and to focus on the effect of the new objective function in train error minimisation (which it directly affects as opposed to test risk, which is only affected indirectly).
**Train / test risk formula**
For each number of rules $k$, the normalized train and test risks are computed as the ratio of the risk of the ensemble $f_k$ to the risk of the empty ensemble $f_0$, i.e., $R(f_k)/R(f_0)$. This normalization makes the results for different datasets comparable.
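In code, the computation is a one-liner (a trivial sketch; the function and variable names are ours):

```python
def normalized_risks(risks):
    """Divide each risk R(f_k) by the base risk R(f_0) of the empty ensemble."""
    base = risks[0]
    return [r / base for r in risks]

# E.g. a run whose base risk 2.0 is halved by each of two added rules:
print(normalized_risks([2.0, 1.0, 0.5]))  # [1.0, 0.5, 0.25]
```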
**Learned rules for tic-tac-toe**
This is indeed an interesting benchmark for the comparison of statistical to logical learners. Unfortunately, the proposed method does not learn the complete rule set of tic-tac-toe, neither within the first six rules nor within the first eleven rules. This can be explained by the overall greedy nature of the boosting approach, combined with an objective function that rewards picking relatively general "statistical" rules. For instance, all boosting variants pick as the first rule (the weight varies with the regularization parameter):
`+1.388 if middle-middle=x`
Within the class of statistical learners considered, we can see that the proposed algorithm substantially improves the risk / complexity trade-off over the other boosting variants and SIRUS (see results).
**Core contributions of the paper and greedy bound computation**
We would like to clarify two issues that we did not communicate clearly in the submitted version of the paper. After the derivation of a novel objective function that anticipates weight corrections, the second technical core contribution of the paper is an algorithm that efficiently computes objective values for incremental subsets of data points in time that depends only linearly on the number of data points $n$ (in contrast to the quadratic dependence on $n$ of a naive implementation). Solving this problem is central both for branch-and-bound search, where the bounding function is typically approximated by optimizing over a prefix sequence of pre-sorted data points, and for greedy search, where incremental cut points have to be evaluated per input variable.
We then opted to “package” the novel objective function with branch-and-bound search because this choice can be expected to emphasize the differences from the previous objective functions more clearly (in particular from Ref 3, which also uses branch-and-bound). However, this branch-and-bound approach is, as of now, only empirical/heuristic in nature. While we did mention this issue in the conclusion section and in detail in the supplementary information, the main text itself was indeed misleading. We are eager to correct that in the published version and thank the reviewer for pointing this out.
To provide more details, we use the prefix order with respect to the sums of selected gradient statistics divided by the norms of the corresponding query vectors. In extensive numerical experiments with subset sizes of up to 20, we find that the prefix greedy approach approximates the optimal subset objective value with a rate of at least 0.75, and in 90% of the cases with a rate of at least 0.9. These experiments are summarized in Section B of the supplementary information (details can be found in the notebook “analysis/greedy_analysis.ipynb” in the submitted codebase). Based on these results, we effectively use the branch-and-bound search as a heuristic 0.75-factor approximation algorithm. As stated in the conclusion, we consider deriving rigorous bounds one of the most interesting open questions related to the new objective function.
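As a simplified illustration of why prefix evaluation is cheap, the following sketch scans all prefixes of the sorted gradients in one linear pass after the $O(n \log n)$ sort. It uses the plain gradient-boosting objective $|g^\top q|/\lVert q\rVert$ with a binary $q$ rather than the paper's orthogonal objective, and the names are ours:

```python
import math

def best_prefix_objective(gradients):
    """Scan all prefixes of the descending-sorted gradient sequence and
    return the best value of |sum(g[:k])| / sqrt(k), computed
    incrementally in a single linear pass after sorting."""
    g = sorted(gradients, reverse=True)
    running, best = 0.0, 0.0
    for k, gi in enumerate(g, start=1):
        running += gi
        best = max(best, abs(running) / math.sqrt(k))
    return best

# Here the best prefix keeps the two large positive gradients and stops:
# |3 + 3| / sqrt(2) beats any shorter or longer prefix.
```

A naive implementation would recompute the sum and norm from scratch for each of the $n$ prefixes, giving the quadratic dependence on $n$ mentioned above.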
**Comparison with column generation methods**
We agree that the column generation approach is very interesting and are eager to compare its performance to boosting. Preliminary results based on seven datasets and our own ad hoc implementation of Ref 30 using Gurobi (we were not able to obtain an implementation from the authors) show the following: when capping the computation time so as to not exceed that of boosting by too much (1000s for an individual fit), column generation does not improve the complexity / accuracy trade-off; in particular, it tends to generate longer rules and to not or only slightly improve training risk. Given the intricacies of the implementation and the design of an insightful experimental setup (e.g., with reasonable caps on computation time), as well as the substantial amount of space required for a satisfactory exposition, we believe that this comparison deserves its own full paper with theoretical discussion and is out of scope for the current work, which primarily focuses on the comparison of boosting objectives.
**Other**
Thank you for pointing out the missing references and suggestions for improving the presentation. These will be incorporated.
---
Rebuttal Comment 1.1:
Comment: Thanks for your detailed clarification and additional experiments. Given that the prefix greedy approach is heuristic and the 0.75-factor approximation is only justified by empirical observation, I will keep my score unchanged. | Summary: This paper introduces a novel approach to gradient boosting of decision rules for interpretable machine learning models. By incorporating a weight correction step and orthogonal projections, the method maximizes predictive gain per rule. The experimental evaluation on various classification, regression, and Poisson regression tasks confirms that the resulting rule learner enhances the trade-off between comprehensibility and accuracy in the fitted ensemble. Moreover, it maintains a computational cost comparable to previous branch-and-bound rule learners.
Strengths: 1. Originality: The paper introduces the first rule boosting algorithm that consistently optimizes the accuracy/complexity trade-off of produced rule sets. This represents a novel contribution to the field.
2. Quality: The research exhibits high quality as it adopts the fully corrective boosting approach, which entails re-optimizing all rule consequents in each boosting round. The study's rigorous algorithm development provides a strong foundation for the research, ensuring the reliability and robustness of the findings.
3. Clarity: The paper explains the new objective function for selecting individual rule bodies and the corresponding efficient algorithm for cut-point search, along with some other algorithm details. The clear explanations contribute to the overall clarity of the research.
4. Significance: The research demonstrates significant improvements over previous boosting variants in terms of the risk/complexity trade-off. The better risk reduction per rule and the affinity to select simpler rules contribute to the overall significance of the findings. Additionally, the comparable computational cost to previous approaches adds to the practical relevance of the research.
Weaknesses: In terms of the compared established methods, SIRUS [1] is the most recent work included in the analysis, published in 2021. However, it is worth noting that some more recent publications, such as [2,3], are not included in the experiment section.
One limitation of the paper's presentation is the heavy reliance on text and equations, with less emphasis on the use of figures and intuitive example case studies. This approach may hinder the reader's ability to grasp complex concepts and visualize the practical applications of the proposed methods. Incorporating more visual aids, such as figures and illustrative examples, could enhance the clarity and accessibility of the research.
[1] C. Bénard, G. Biau, S. Da Veiga, and E. Scornet. Interpretable random forests via rule extraction. In International Conference on Artificial Intelligence and Statistics, pages 937–945. PMLR, 2021.
[2] V. F. Souza, F. Cicalese, E. Laber, et al. Decision Trees with Short Explainable Rules. Advances in Neural Information Processing Systems, 35: 12365–12379, 2022.
[3] S. Calzavara, L. Cazzaro, C. Lucchese, et al. Explainable Global Fairness Verification of Tree-Based Classifiers. In 2023 IEEE Conference on Secure and Trustworthy Machine Learning (SaTML), pages 1–17. IEEE, 2023.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Could you provide some insights into the reason why certain newly published works, such as [1,2], were not included in the experiment section? Were there any specific criteria or limitations that influenced the selection of methods for comparison?
[1] V. F. Souza, F. Cicalese, E. Laber, et al. Decision Trees with Short Explainable Rules. Advances in Neural Information Processing Systems, 35: 12365–12379, 2022.
[2] S. Calzavara, L. Cazzaro, C. Lucchese, et al. Explainable Global Fairness Verification of Tree-Based Classifiers. In 2023 IEEE Conference on Secure and Trustworthy Machine Learning (SaTML), pages 1–17. IEEE, 2023.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: A limitation of the study is that while it includes the most recent work, SIRUS [1], which was published in 2021, it does not incorporate some more recent publications like [2,3] in the experiment section. This omission limits the comprehensiveness of the analysis and may overlook potential advancements or alternative approaches introduced in these newer works.
[1] C. Bénard, G. Biau, S. Da Veiga, and E. Scornet. Interpretable random forests via rule extraction. In International Conference on Artificial Intelligence and Statistics, pages 937–945. PMLR, 2021.
[2] V. F. Souza, F. Cicalese, E. Laber, et al. Decision Trees with Short Explainable Rules. Advances in Neural Information Processing Systems, 35: 12365–12379, 2022.
[3] S. Calzavara, L. Cazzaro, C. Lucchese, et al. Explainable Global Fairness Verification of Tree-Based Classifiers. In 2023 IEEE Conference on Secure and Trustworthy Machine Learning (SaTML), pages 1–17. IEEE, 2023.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank you for the positive evaluation of our work as well as for pointing out recent developments in the related literature.
Based on a first assessment, the work of Calzavara et al. addresses a related but clearly distinct problem: the extraction of a rule set from a single tree that is causally fair (w.r.t. some definition) but does not necessarily optimize the trade-off between rule set size and predictive performance. The other work, by Souza et al., could be an interesting point of comparison in the future, as it is representative of the line of work on generating small individual decision trees. However, note that to our understanding this work is concerned only with classification trees that perfectly classify the given training data. Hence it does not directly fit into our empirical comparison, which, in contrast, considers methods that can learn arbitrary statistical response models as long as the target variable is modeled conditionally with an exponential-family distribution. In particular, we use logistic regression, least-squares regression, and Poisson regression, none of which can be treated by the approach described in Souza et al.
Moreover, we appreciate the idea of including more illustrations in the work. The uploaded pdf in the general rebuttal shows more examples of actual rule ensembles and the effect of the new algorithm on the whole size/accuracy trade-off curve. We plan to include these either in the updated appendix or even in the main text if sufficient room can be made.
---
Rebuttal Comment 1.1:
Comment: Thanks a lot for your clarification; I have read your response. | Summary: The paper presents a new algorithm for learning rule ensembles and claims that these are interpretable, but does not present any support for this claim.
Strengths: The proposed method is reasonable, but, in the context of other work in this area, not ground-shaking. The experimental evaluation is done well, but does not touch on interpretability.
Weaknesses: There is no evidence that the learned rule sets are interpretable.
Rule complexity does not have much to do with cognitive complexity.
The efficiency of the algorithm is overstated in the paper.
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: None.
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 2 fair
Limitations: This is another paper that claims that rule ensembles are interpretable. No evidence is presented to that end, the claim is just derived from the fact that rules are, by themselves, interpretable. However, for example, random forests are well-known to be not interpretable, and they are also just a rule ensemble. In addition, the situation is even worse here, because in a random forest at least each individual rule is interpretable, and may be viewed as an explanation for all the examples it covers. In an additive boosting setting, this property also does not hold, because each rule corrects and refines predictions of previous rules, so rules can no longer be interpreted in isolation, but only in the context of all previous rules. Even a single example can not be easily explained by a gradient-boosted rule set, because one would have to understand the interaction of multiple rules.
The authors use the term "cognitive complexity" for something that is essentially the size of a rule set. Again, this is a complete misnomer, as the cognitive effort to parse a rule set does not only depend on the size of the theory. As explained above, there might be dependencies between rules, or rules may be considered in isolation (the latter having a much lower cognitive complexity). There are also factors such as the familiarity with the used concepts. For example, the cognitive effort required to read a page of text in your mother tongue is much lower than the cognitive effort required to read a page in a language that you are just learning, even though both, the content, as well as the syntactic length (essentially the author's measure of cognitive complexity) is the same.
It is a pity that the authors make such unfounded claims about interpretability, when they could simply present their work as an attempt to learn a simpler rule ensemble. As such, the work is reasonable, but not a great breakthrough. What they essentially propose (following previous work) is to re-optimize all weights once a new rule is added, and to build an efficient algorithm around that idea. It gains a little in performance, as can be expected, but it is not a great breakthrough.
The small advantage seems to be bought with an increase in computation time, which the authors interpret as "in the same order of magnitude" except for one case, where it is slower by a factor of 26. Actually, on most of the datasets the algorithm seems to be at least a factor of 2 slower, sometimes worse.
Minor comments:
Some of the numbers in Table 1 are obviously wrong (e.g., testing risks of 109.5 or 4.115 for XGB).
Flag For Ethics Review: ['No ethics review needed.']
Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: As stated in the global rebuttal, we take all the concerns regarding the nomenclature around interpretability and the term cognitive complexity seriously, and we are happy to modify the language to reflect that gains in ensemble simplicity do not necessarily imply an interpretable model in absolute cognitive terms. As a whole, however, we believe your review misrepresents the actual core claims of the paper and does not recognise its main contributions.
Firstly, the paper explicitly agrees with your assessment that not all rule ensembles can be interpretable, for the simple reason that they can be too long to ever be parsed by one interpreter or even a group of interpreters. This is why the paper is based on the widely accepted premise that, all other cognitive factors being equal, shorter rule ensembles are relatively more interpretable than longer ones (alternatively, if one insists on a dichotomous notion of "interpretability", one could say they "stand a better chance of being interpretable").
Secondly, based on that motivation the paper proposes a novel and technically non-trivial algorithmic approach to improve the trade-off of the ensemble size and the predictive performance. This approach is based on the idea of a weight correction step after each boosting round, which, as you point out, is described elsewhere. However, the main technical contributions of our work, which you do not mention, are then
1. to derive from this idea a novel sound objective function that successfully identifies the optimal rule to add when anticipating the subsequent weight updates (Theorem 4.3) and
2. to derive an efficient algorithm for optimising this objective function over incremental sequences of cut points, as it is required to practically use an objective function in either greedy or branch-and-bound rule learning.
Both of these are original and non-trivial. Moreover, we demonstrate clearly that this approach allows the learning of rule ensembles with a better complexity / accuracy trade-off. While you assess the average margins of improvement as not “ground-shaking”, they show up consistently over a wide range of settings, and, if you look, e.g., at the specific examples shown in Figure 1 of the paper or the newly uploaded figure in the global rebuttal, the improvements can be quite impressive at individual prediction-risk levels.
Finally, we would like to defend our statements regarding the efficiency of the algorithm. Firstly, a factor of two can be considered a price that is easy and worth paying in scenarios where simpler rule ensembles are desired (it is therefore typically regarded as the same order of magnitude). Moreover, algorithmic details, and certainly the implementations of newly published approaches, tend to be suboptimal at first publication and are subsequently improved incrementally. Finally, in the theoretical asymptotic sense, the proposed algorithm can certainly be considered efficient relative to straightforward solutions. Through Contribution 2, it enables all cut-point evaluations for one input variable in the main loop with a time complexity growing only linearly in $n$, as opposed to the quadratic growth one would get with a naive implementation.
---
Rebuttal Comment 1.1:
Comment: I think you are missing the point. Additive rule ensembles by themselves are not easily interpretable regardless of their complexity. Take a simple problem: We have one real-valued attribute in the range [0,10) and want to learn the rule for truncating the digits after the decimal point.
A conventional rule learning algorithm learns:
* IF x < 1 then 0
* IF x in [1,2) then 1
...
* IF x > 9 then 9
You get the idea.
Now what does an additive rule ensemble learn? Something like
* IF true then 5
* IF x in [0,5) then subtract 2
* IF x in [5,10) then add 2
* IF x in [0,2) and x in [5,7) then subtract 1
* IF x in [2,5) and x in [7,10) then add 1
etc.
The complexity of both rule sets is similar (in the same order of magnitude, if you want), but the first one is easily interpretable, while the second is not. You can construct similar examples with classification as well. The point is that in conventional rule sets, a single rule can be used for an explanation, whereas in additive rule ensembles every rule depends on all other rules. The number of rules and thus the complexity of the learned set or ensemble is not that important.
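To make the structural contrast concrete, here is a minimal sketch (not from the paper or the review) of the two prediction modes; reading a single prediction off the additive ensemble requires consulting every firing rule. The additive rule set used here is one correct encoding of the truncation function, chosen for illustration (it differs from the ensemble sketched above):

```python
def predict_rule_list(x, rules):
    # Conventional rule list: return the value of the first matching rule.
    for cond, value in rules:
        if cond(x):
            return value

def predict_additive(x, rules):
    # Additive ensemble: sum the contributions of every rule that fires.
    return sum(value for cond, value in rules if cond(x))

# Conventional truncation rules from the example: IF x in [k, k+1) THEN k.
trunc_list = [(lambda x, k=k: k <= x < k + 1, k) for k in range(10)]

# An additive encoding of the same function: +1 for each integer threshold passed.
trunc_additive = [(lambda x, k=k: x >= k, 1) for k in range(1, 10)]

assert predict_rule_list(3.7, trunc_list) == 3
assert predict_additive(3.7, trunc_additive) == 3  # three rules fire and must be summed
```

Both compute the same function with a similar number of rules, but the explanation of a single prediction differs: one matching rule versus a sum over all firing rules.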
Having said that, I repeat that I don't think that your contributions are the main problem; it is the way you present them. I would have liked your paper much better if you were not trying to oversell its contributions. This concerns not only interpretability, but also run-time. I agree that others such as the ones you cite do equate interpretability with complexity, but many of them learn rule sets, not additive ensembles, and (without checking) I dare say that only few actually dared to call syntactic complexity "cognitive complexity". In any case, if you research the literature on interpretability you find several papers whose user studies have found that the correlation between interpretability and low complexity is only weak, often non-existent, or even negative. As I wrote, if you had simply presented your paper as an approach for learning smaller ensembles, this would be quite o.k.
The same with the run-time complexity. If you had argued in the paper that a disadvantage of a factor of 2 is maybe not that bad, I would certainly not have complained, but glossing over such a difference with "in the same order of magnitude" is overselling.
Try to analyze your algorithm objectively, and reviewers like myself would certainly be more positive. We all know that you cannot develop an algorithm that beats everything else in all regards, which makes papers that claim to have succeeded already suspicious. Even algorithms that are worse than others can be valuable contributions because, as you write, once published, an interesting idea can be picked up and improved by other researchers. But the important part is that you do an honest evaluation and do not try to sell it with unfounded claims.
I'm not sure what the discussion will yield, and what the handling chair will eventually decide, but I would certainly rather have a re-review of a thorough revision of this paper at another conference than accept it here. Should it be accepted nevertheless, I do hope that you tone down your claims accordingly.
---
Reply to Comment 1.1.1:
Comment: Dear reviewer,
You say again that the contributions of the paper are not the problem but that your reason for rejection is that they are “oversold” in a way that can only be addressed by a thorough revision. We agree that some formulations should be changed but, on inspection of the paper, find it hard to see why this cannot be done with a couple of specific adjustments. Further, we had no intention of overselling results or of claiming that, as you suggest, the proposed algorithm “beats everything else in all regards”, as indicated by our rather measured conclusion (l323) that "the proposed [...] approach *is a worthwhile alternative to previously published boosting variants for rule learning*, especially when targeting a beneficial risk-complexity trade-off and an overall small number of rules."
Below, we provide a list of implicit and explicit claims made in the paper about its contributions that might have created the impression of “overselling”. For each of them, we give a justification or a simple reformulation that we intend to apply and that we hope addresses your concerns. We hope that revisiting these points will lead you to a more favourable evaluation of our work:
1. The title “Orthogonal Gradient Boosting for Interpretable Additive Rule Ensembles”. We now understand that using the term interpretable, especially in absolute terms, might raise wrong expectations. Therefore we intend to use instead: “Orthogonal Gradient Boosting for Simpler Additive Rule Ensembles”. Similarly, we intend to change other mentions of the absolute term “interpretable”.
2. Using the term “*cognitive* complexity in terms of the number and length of rules” in the introduction and elsewhere. We understand that this term might raise a wrong impression and therefore intend to simply use “complexity” throughout the paper. We originally used the qualifier *cognitive* simply to differentiate it from "computational complexity", which is also sometimes considered for rule ensembles. Also note that the paper gave the definition as "in terms of number and length of rules" and therefore was clear about the intended meaning.
3. The explicit contribution statement that says that we "develop the first rule boosting algorithm that consistently optimizes the accuracy/complexity trade-off" and "derive a corresponding efficient algorithm for cut-point search”. We still consider both statements appropriate, as the novel objective function provably takes the weight correction into account and therefore increases the risk reduction per rule, and the cut point search algorithm is theoretically efficient, as its cost only depends linearly on the data size instead of quadratically as does the naive implementation.
4. The summary of the evaluation that says that the "computational cost remains *comparable* to the previous branch-and-bound learner". That point was also reinforced in the evaluation section, where it was said that the new algorithm has "costs in the same order of magnitude" as XGB "except for one extreme case". While formulations like “comparable” and “same order of magnitude” are not uncommon to describe differences of a factor that is closer to 2 than to 10, we agree that these are at least ambiguous and can raise a wrong impression. We therefore intend to replace this by a more explicit summary of "an only modest computational overhead of a factor of about 2 for 26 of 34 datasets and a factor of less than 5 for seven of the remaining eight datasets."
Please let us know if there are other formulations that you think are problematic and we are happy to address those as well.
On a separate note let us comment on your general concerns about additive rule ensembles. Your example is indeed a nice benchmark problem where gradient boosted additive rule ensembles might not be the optimal choice. However, note that the additive rule ensemble that you suppose would be learned does not seem correct (rules 4 and 5 as given have empty coverage; perhaps you intended to use an “or” instead, but that is not something we consider). More generally, the class of additive rule ensembles contains the ideal ensemble that you sketch, and, depending on the employed approach, gradient boosting, or more likely column generation, might learn it. That said, we agree that GB will likely learn overlapping rules instead of the non-overlapping ones that are ideal in this scenario. If we know that this constraint is desirable, simply using rules from a single decision tree is indeed a better approach. However, there are lots of prediction problems where there are several independent additive effects, and in these cases overlapping rules are much more appropriate. Take for example risk factors of coronary heart disease. In these cases we would like to see two rules (e.g., +2 if smoker, +2 if obese) refining the general population risk log odds rather than listing all non-overlapping alternatives. Gradient boosting would be well equipped to produce exactly that. | Summary: This paper introduces Fully-Corrective Orthogonal Gradient Boosting (FCOGB), a novel algorithm aimed at facilitating interpretable rule learning. The study contends that existing rule learning algorithms often yield complex models that pose challenges for interpretation. FCOGB addresses this concern by generating simpler and more easily understandable models.
FCOGB is an extension of the widely employed gradient boosting algorithm, utilized for constructing predictive models. It employs a branch-and-bound search algorithm to identify the optimal set of rules that minimize prediction errors.
Strengths: The proposed method is supported by theoretical justifications and intuitive explanations using figures. Additionally, the paper proposes algorithms with computational complexity analysis to efficiently implement the method, demonstrating practical applicability.
Weaknesses: Despite an increase in the required training time (takes several times longer computation), the generalization performance does not improve. If this weakness is addressed, I believe it would become a very strong paper.
(Minor comments)
- Despite Figure 2 being referenced on page 5, the figure is actually inserted on page 3.
- The scatters plot in Figure 3 are difficult to interpret due to overlapping points. Please set alpha (transparency).
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: - Table 1 evaluates the risk for cognitive complexity values ranging from 1 to 50 and provides the averaged risk. However, I'm wondering if there are any trends or patterns in the variations. For example, do the superior algorithms change when cognitive complexity is fixed at 5 compared to when it is fixed at 25? Additionally, if the range is changed to 1 to 25 or extended to 1 to 100, would the results be different? I believe this perspective would enhance the sense of conviction, and it might provide an approach for improving test error in your method.
- Are there any test error improvement strategies that can be considered? Otherwise, from the perspective of interpretability, are there cases where it is desirable for only the training error to decrease without an improvement in generalization performance (overfitting)?
- While SIRUS is presented as state-of-the-art, as seen in Figure 1, its performance looks unsuitable as a comparative reference. Do you have any idea why SIRUS does not work well in this example?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank you for your thoughtful comments and hope you will consider upgrading your evaluation in the light of the following clarifications.
**Generalization performance / test risk**
You identified an unsatisfactory performance in terms of the test risk as the main weakness of the paper and assessed that, if addressed, "it would become a very strong paper". *As the results uploaded with the global rebuttal show, the test risk performance can in fact be fixed easily by introducing regularization to all the boosting variants.* Specifically, we now performed L2-regularization where the regularization hyper-parameter is tuned by an internal 5-fold cross-validation on the training set (and that tuning is performed for each of the targeted number of rules individually). This results in the proposed method having the best average test risk (over all considered complexity levels) for 23 of 34 datasets and the advantage compared to all other boosting variants to be statistically significant (for both train and test with appropriate Bonferroni correction). It is sensible that the proposed method benefits the most from regularization, given that it fits the training data most tightly for a given number of rules and is therefore, without regularization, more prone to overfitting for the larger considered cognitive complexity levels.
**Effect of changing the considered complexity horizon**
As you indicate, this question is related to the overfitting/regularization issue: in terms of training risk, the proposed method, which fits the data tightest per rule, comes out on top over a range of maximal complexity levels. In contrast, in terms of test error, this can backfire: without regularization, the method can enter the overfitting regime before other methods. However, that depends on the individual dataset. In the two examples in Figure 1, the test advantage of the proposed method is actually largest when going up to complexity 100. To focus on smaller and thus more interpretable complexity levels, as well as to speed up the overall computation time of the experiments, we originally decided to focus on 50. However, we appreciate the idea of including the results for the other maximal values and will do so in the revised supplementary information.
**Test error improvements / relevance of the training error**
As stated above, simple L2-regularization already achieves the desired results. Further improvements can likely be achieved by designing regularization terms that specifically punish longer rules. We originally focused on the training error because it is a measure of how well the various objective functions approximate the “idealized boosting objective” of finding the single rule that allows a maximal reduction of the training risk, and we believe that this is important in its own right. Notably, the advantage of the proposed method is retained when considering the regularized training risk instead (as reflected in the uploaded results in the global rebuttal).
**Value of SIRUS baseline**
Note that SIRUS is included as the state-of-the-art “generate-and-filter” method, and it was previously shown that gradient boosting tends to be superior to that approach. Hence, the main baselines are the previously published gradient boosting variants. We believe that the main reason for the problems of generate-and-filter, especially based on random forests, is that the individual rules present in the pool are not generated with the purpose of being good models in the context of small rule ensembles. Therefore, many of them tend to be required to achieve a reasonable predictive performance.
---
Rebuttal Comment 1.1:
Comment: Thank you for your updates.
Given the improvement in generalization performance, I would like to change the score from 4 to 5. | Rebuttal 1:
Rebuttal: We thank all reviewers for their thoughtful comments and their mostly positive assessment. In addition to the individual rebuttals, we would like to clarify here two central points:
1. The test performance of the proposed algorithm and the fact that it can be easily improved by regularization (see updated results attached).
2. The main contributions of the paper and how they are related to interpretability.
Regarding the test error comparison we are happy to report that the proposed method in fact significantly outperforms all baselines when performing adequate regularization. In the originally submitted experiment we omitted regularization in an effort to keep the workflow as simple as possible and to focus on the direct effect of the new objective function on the training error. However, this led to sometimes large degrees of overfitting especially for the proposed method, which, due to the improved objective function, generally fits the training data tighter for a given number of rules. When introducing L2-regularization to all boosting variants and choosing the corresponding lambda value by 5-fold cross validation on the training set, the proposed method has the best test performance on 23 out of 34 datasets, and a Bonferroni corrected t-test shows that the improvement over all other boosting variants is statistically significant for both train and test error. Please see the table in the attached pdf for detailed results.
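The described tuning protocol can be sketched generically as follows (synthetic data and a plain ridge-style weight correction; the names, sizes, and grid values are illustrative, not the authors' implementation):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins: Q holds the outputs of t rules (columns) on n training
# points; y holds the regression targets.
n, t = 200, 5
Q = rng.standard_normal((n, t))
y = Q @ rng.standard_normal(t) + 0.5 * rng.standard_normal(n)

def ridge_weights(Q, y, lam):
    # Closed-form L2-regularized weight correction: (Q'Q + lam*I)^{-1} Q'y.
    return np.linalg.solve(Q.T @ Q + lam * np.eye(Q.shape[1]), Q.T @ y)

def cv_risk(Q, y, lam, k=5):
    # Mean held-out squared error over k folds of the training set.
    folds = np.array_split(np.arange(len(y)), k)
    errs = []
    for fold in folds:
        mask = np.ones(len(y), dtype=bool)
        mask[fold] = False
        w = ridge_weights(Q[mask], y[mask], lam)
        errs.append(np.mean((y[fold] - Q[fold] @ w) ** 2))
    return float(np.mean(errs))

# Choose lambda from a fixed grid by internal 5-fold cross-validation,
# then refit the weights on the full training set.
lam_grid = [0.01, 0.1, 1.0, 10.0]
best_lam = min(lam_grid, key=lambda lam: cv_risk(Q, y, lam))
w = ridge_weights(Q, y, best_lam)
```

Larger lambda values shrink the corrected weights toward zero, trading training fit for stability on held-out data.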
Regarding the notion of interpretability, we would like to acknowledge that claims regarding interpretability in absolute terms may indeed require considerations of cognitive psychology and involve the familiarity of the investigator with symbols among other things. We are happy to modulate the language in the paper accordingly. That being said, we do maintain that, all other cognitive variables being equal, shorter rule ensembles stand a better chance of being interpretable than longer rule ensembles, which is in line with lots of work published in the rule learning community (e.g., Refs 2, 3, 6, 10, 14, 29). We believe this to be also self-evident in the examples of concrete rule ensembles presented in Figure 1 of the paper as well as in Figure 1 of the pdf attached to this rebuttal. These cases demonstrate that the proposed method produces much shorter and easier to interpret results in several concrete examples, which is in addition to the positive large-scale quantitative evaluation that has been performed on a wide range of prediction tasks.
These advantages are enabled by the two main technical contributions of the paper: a novel objective function that identifies optimal rule bodies to add when taking a subsequent weight correction step into account, as well as an efficient algorithm to compute this objective function for an incrementally increasing collection of data points. The first allows a more stringent optimization of the length versus accuracy trade-off of rule ensembles. The latter is crucial to practically employ the new objective function in branch-and-bound as well as greedy rule search.
Pdf: /pdf/bf34314915220bdc2dd521c6930da8297a07ea07.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: This paper proposes a framework of fully corrective orthogonal boosting. The main algorithmic difference here is the objective for each next weak model. It is the cosine between the gradient (which is orthogonal to previous weak learners by construction) and the part of the new model that is orthogonal to previous models. Authors motivate their work by the need of interpretable models, so they restrict themselves to the case of rules as weak learners. Also, they use a variant of b&b algorithm for optimal weak learner search instead of commonly used greedy construction in depth.
The authors claim that the paper proposes an algorithm for constructing shorter and more interpretable rules for the Gradient Boosting of Decision Rules model. Experiments show that the described method outperforms standard implementations of GB in the case of models of low complexity.
Although the theoretical part is sound, practical questions are not thoroughly addressed or answered.
Strengths: The main part of the paper is well-written, the terms, designations and ideas are clear. The proposed method is sound, reasonable and well described. The idea of orthogonal rule search in conjunction with fully-corrective boosting looks good. The theoretical part is described very well, the main formulations are correct, and the obtained contributions look important and are novel to the best of my knowledge. The proposed algorithm is justified and has the potential to compete with SOTA in the outlined formulation that refers to "cognitive complexity".
The only point I did not buy is the Poisson loss defined in line 109, which is, in my opinion, incorrect (or unclear), because, formally, from that definition, its minimum is at $f(x_i)=0$ independently of $y_i$.
Weaknesses: I have the following concerns about the research direction itself. Claimed advantage of ensembles of rules over ensembles of trees is their human interpretability. However, I cannot agree that the decisions a rule ensemble makes can be treated as interpretable. Particularly, I argue that in the domains where interpretation is important summation of even two terms is usually not interpretable for humans. Most critical decisions in such domains like medicine and justice, partly science and risk management are usually based on several binary factors, not a sum of dozens of rules. Where ensembles of rules are really used in practice?
Second, I am disappointed that the term "cognitive complexity" was left without any background. I would expect references to some papers using this metric or an explicit statement that this way to estimate models' complexity is originally proposed in the current paper. Furthermore, I would expect some consideration of actual research in the psychology domain that addresses the problem of cognitive complexity of calculations.
For example, we can see in "Human knowledge models: Learning applied knowledge from the data." Plos one, 2022, by E. Dudyrev et al., that a human decision is usually based on:
- Boolean operators: OR, AND, NOT, and thresholded Boolean SUM (arithmetic sum of Boolean variables, compared to an integer threshold)
- At most four (Boolean) variables, where each variable is used at most once
These ideas are rather far from the concept of sums of dozens of rules.
See also:
Lemonidis C., “Mental Computation and Estimation: Implications for mathematics education research, teaching and learning”, 2015,
Marois R et al, "Capacity limits of information processing in the brain," Trends in cognitive sciences, vol. 9, no. 6, pp. 296–305, 2005
Nys J. et al, "Complex Mental Arithmetic: The Contribution of the Number Sense," Canadian journal of experimental psychology, vol. 64, no. 3, pp. 215–220, 2010.
At last, but not least, the experimental part spoils the impression of the work and requires improvements:
- First of all, I see no hyperparameter tuning step description (e.g. regularization terms for XGB, number of boosting rounds, length of decision rules) in the section on experiments. Are there any hyperparameters which may have a significant impact on the performance of FCOGB? Were they left "defaulted", or were they tuned by a separate step of the algorithm?
- In the beginning of Section 5, it is mentioned that you use only 5 runs for each dataset with a < 50 cognitive complexity (CC) limit. But then I see averaging over complexities between 1 and 50 in the description. What does it mean? I suppose that CC may vary in different runs but is limited to 50, is that true? Or did the authors perform an exhaustive search of all possible CCs and average over them? If the first is true, I suspect that different models may have had different mean CC values, so the comparison results are not quite fair. If the second is true, then it is unclear why such an averaging can prove something
- It would be interesting to see the dynamics of quality with respect to increasing CC. In particular, some graphs that plot quality vs. CC to see which algorithm uses the CC limit more effectively.
- It may be useful to provide a comparison with other variants of fully-corrective boosting implementations, since the quality gain may originate from this scheme alone
- Time limitations should be discussed more in terms of time per CC point and Pareto curves (time to achieve the desired quality)
- How should we interpret relatively low quality for regression problems?
- This paper addresses interpretability of trained decision rules, so it would be profitable to demonstrate a difference in the simplicity of interpretation for FCOGB rules and, e.g., XGB rules
Technical Quality: 2 fair
Clarity: 4 excellent
Questions for Authors: - Equation (4) departs far from the original GB. The point $(Q_{t-1};g) \alpha^*$ in the functional space is used as a target for the next step of rule search. You can go one step further this way: use the strict minimizer $f^*$ of the loss functional as a target instead of $(Q_{t-1};g) \alpha^*$. This looks like a simpler choice, which can also increase boosting convergence and thus the simplicity of the trained ensemble. Did you consider this option?
- How does the b&b approach to rule search relate to the previous literature on gradient boosting of rules?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 4 excellent
Contribution: 2 fair
Limitations: I do not see any particular limitation of the proposed work
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank you for the constructive feedback and overall positive assessment. Please find below point-by-point responses of your concerns and questions.
**Usage of the terms interpretability and cognitive complexity**
We understand your comments regarding the interpretability of rule ensembles and are happy to modulate the language around this notion (see global rebuttal). We do maintain however that, all other cognitive variables being equal, shorter rule ensembles are relatively more interpretable than longer rule ensembles, which is in line with many published works in the rule learning community (e.g., Refs 2, 3, 6, 10, 14, 29). We invite you to inspect again the examples shown in Figure 1 as a support of this intuitive claim as well as the pdf attached to the global rebuttal. Moreover, we would like to clarify that the employed complexity measure was already used in other work (Refs 2 and 6). We used the term “cognitive complexity” in our work mainly to disambiguate the concept from “computational complexity”, which can also be considered a complexity measure for models (in terms of the expected number of iterations one has to carry out to compute a prediction). As expressed in the global rebuttal, we are happy to change the wording to avoid misleading conclusions. Moreover, we plan to point to the provided references to highlight the difference in focus to work in cognition science.
**Description of hyper parameter tuning**
In the submitted version we aimed to keep experiments as simple as possible and avoided hyper-parameters by not applying any regularization or upper limit for the rule length. For the number of rules we simply ran all boosting variants until the designated complexity level was reached and then considered all intermediate complexities of rule ensembles that can be produced by a method (see also clarification below regarding the complexity levels). As described in the global rebuttal, not using regularization kept the focus mainly on the training performance that could be achieved with a certain complexity and inevitably led to overfitting for the largest complexity levels. Based on the reviewer comments we have since incorporated regularization into all boosting methods by choosing lambda values from a fixed grid via performing an internal 5-fold cross-validation on the training set. Indeed with this modification the results are much more favorable for the proposed method in terms of test performance (see pdf in global rebuttal).
**Clarification of complexity levels**
Ensembles at all complexity levels that a method can produce are considered (up to the maximum considered). To realize the different ensembles, the number of rules is varied. Indeed, for different repetitions based on different training/test splits, the achieved complexities and risk levels differ. Hence we consider average complexity / risk trade-off curves. As the exemplary curves shown in Figure 1 demonstrate, risk advantages are usually achieved on a wide range of complexity levels (n.b., we believe that these are the curves that you ask for). However, to obtain a simple quantitative summary per dataset, we then consider risks averaged over all complexity levels, which leads to the overall statistical comparison between methods.
**The performance for regression versus classification problems**
The achieved risks for regression are typically lower than those for classification for the considered complexity levels. We believe this is due to a better alignment of the boosting objective with the actual error minimizing direction due to the simpler squared loss when compared with the logistic loss (which requires many iterations even when exact gradient descent can be performed, which is not the case for gradient boosting, where we approximate search directions with the available rules).
**The possibility of using the strict minimiser in Equation 4**
This would indeed be the optimal rule to add and correspond to what we can refer to as “idealized boosting” with weight correction (see also Ref 26), which considers this variant for theoretical investigation. However, note that in practice we have to find the optimal rule by implicitly searching the space of all rules and it is completely unclear how to perform this optimization efficiently for this idealized objective. In fact, all the various boosting objectives can be considered approximations to this ideal objective, and we can see based on the training performance that, among the known choices, the function proposed in our work is the best such approximation.
**Branch-and-bound rule search in previous literature**
Branch-and-bound search has been considered for the simple objectives in separate-and-conquer rule learners (see 11) and subgroup discovery (see 15). In the context of gradient rule boosting, to our knowledge, using branch-and-bound search was proposed in Ref 3, however using the XGB objective function. In our work we adapt it to the refined objective function. However, we should note that our technical contributions go beyond the usage of branch-and-bound, as the proposed objective function can also be optimized with the more typical greedy algorithm, and the fast incremental computations provided in our paper are also necessary in this approach.
**Other points**
We appreciate the other suggestions and will provide more detailed comparison curves in the supplementary material as well as an ablation study to show the impact of individual aspects of the proposed methods, in particular the performance when using the faster greedy algorithm for single rule optimization. Finally, there was indeed a typo in the Poisson loss. The correct definition is $l_{poi}(f (x_i), y_i) = y_i\log y_i − y_i f(x_i) − y_i + \exp(f(x_i))$. | null | null | null | null | null | null |
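As a quick numerical sanity check of this corrected definition (which matches the standard Poisson deviance with a log link, up to a constant factor), the following snippet verifies that the loss is minimized at $f(x_i) = \log y_i$, where it equals zero, rather than at $f(x_i) = 0$:

```python
import math

def poisson_loss(f, y):
    # l_poi(f, y) = y*log(y) - y*f - y + exp(f)
    return y * math.log(y) - y * f - y + math.exp(f)

y = 3.0
f_star = math.log(y)                          # analytic minimizer: f = log(y)
assert abs(poisson_loss(f_star, y)) < 1e-12   # loss is zero at the minimizer
assert poisson_loss(0.0, y) > 1.0             # clearly not minimal at f = 0

# A grid search confirms the minimizer sits near log(y), so it does depend on y.
grid = [i / 100 for i in range(-200, 301)]
best_f = min(grid, key=lambda f: poisson_loss(f, y))
assert abs(best_f - f_star) < 0.01
```

Setting the derivative $-y + \exp(f)$ to zero gives the minimizer $f = \log y$ directly.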
Versatile Energy-Based Probabilistic Models for High Energy Physics | Accept (poster) | Summary: In this paper, the authors describe an energy-based generative framework for modeling the behavior of elementary particles. This framework can then be used to generate High Energy Physics events, similar to those at LHC, as well as be used for anomaly detection, specifically QCD jets.
Strengths: The strength of the paper lies in the clarity of writing, technical detail, and interesting application of the proposed methodology.
Weaknesses: The main weakness of the paper is that it is hard to gauge how the improvements of the methodology over the baselines would actually translate into improvements of event detection at LHC.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. In the paper, the line "The results of generative sampling are shown in Appendix C" actually refers to Appendix B.2. Please correct this if it is indeed the case. Also, please describe the dataset used in more detail (for example, does it have class imbalances?) and consider including a confusion matrix or supplementary methods to better compare EBM-CLF to ParticleNet
2. Please include in figure 4 left, the curve for EBM-CLF(E(x)) AUC(Top).
3. Table 2 is a bit hard to read because of the many shades of gray and the inclusion of the background QCD classes. Please reconsider how to show the comparison more clearly.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear Reviewer bfxx,
Thank you for taking the time to review our manuscript and providing thoughtful feedback. We have made revisions according to the comments. The following is the correspondence to the specific questions/comments.
-----
***Weaknesses:***
*The main weakness of the paper is that it is hard to gauge how the improvements of the methodology over
the baselines would actually translate in improvements of event detection at LHC.*
First, one of the most important advantages of these machine-learning-assisted methods is that they facilitate model-independent signal detection. That means we effectively have a paradigm shift from theoretical-model-driven methods to data-driven methods. The traditional approach of designing theoretical models, predicting signals, and excluding models is limiting in the sense that it cannot cover all possible signal detection channels. With deep learning models directly trained on background events, we are now able to detect unobserved signals regardless of the underlying theoretical model. Based on that, we are trying to build deep learning models that are robust and sensitive to different possible signals. That's why we think the EBM is a very promising candidate, given that it can be trained on background events in an unsupervised manner and it displays beautiful mass decorrelation without any auxiliary tasks (Fig. 3).
***Questions:***
1.1 *In the paper the line "The results of generative sampling are shown in Appendix C" refers to actually refer to Appendix B.2.
Please correct this if it is indeed the case.*
Thank you for pointing this out. This mismatch actually originates from the revision we made in the appendices during the supplementary material preparation phase. If you look at the supplementary material (we also included the main body for better readability), you can find that the reference to the appendices is correct.
1.2 *Also, please describe more the dataset used (for example, does it have class imbalances?) and consider including some confusion
matrix or supplementary methods to better compare EBM-CLF to ParticleNet*
The datasets are described in Appendix A. There is no class imbalance. We have further clarified a few details in the revised manuscript. We quote the relevant part here for reference:
> For the hybrid model EBM-CLF, we train on 300,000 simulated Standard Model jets (100,000 QCD jets, 100,000 boosted jets originating from the W boson, and 100,000 boosted jets originating from the top quark).
We also included the confusion matrices for reference in the attached pdf in the global rebuttal.
2. *Please include in figure 4 left, the curve for EBM-CLF(E(x)) AUC(Top).*
Figure 4 (left) has been updated. Please refer to the attached pdf in the global rebuttal.
3. *Table 2 is a bit hard to read because of the many shades of gray and the inclusion of the background QCD classes.
Please reconsider how to show the comparison more clearly.*
Thank you for the suggestion. We have adjusted Table 2 to increase readability. Please refer to the pdf attached to the global rebuttal. Let us know if it is clear enough.
-----
We hope that we have addressed all the concerns raised. Please let us know if you have any further comments or questions.
Best regards,
Authors
---
Rebuttal Comment 1.1:
Title: Response following rebuttal
Comment: Dear authors,
Thank you for the time taken to write the rebuttal and for addressing the points I raised. | Summary: This paper proposes to use EBMs to model the fundamental interactions between elementary particles. The paper first introduces a set of techniques to enable effective learning of energy functions across elementary particles, including the use of a classifier to model the conditional distributions. The paper then proposes an architecture to capture permutation-invariant interactions between particles. The paper proposes a set of metrics to measure the generation quality of EBMs as a substitute for maximum likelihood estimation and illustrates how EBMs can effectively detect anomalies between particles.
Strengths: - The paper tackles an interesting and timely problem and explores how generative models (EBMs) can be used in the scientific application of particle dynamics.
- The paper is clearly written, and the experiments demonstrate the utility of EBMs at both modeling the behaviors of underlying particles and detecting spurious patterns
- The introduction compellingly introduces and motivates the problem studied by the paper
- In comparison to many other generative models, EBMs are much less well studied, and this paper represents an original and interesting application of EBMs to different use cases
- The method section has a variety of interesting modifications and tricks over prior work (such as the removal of the entropy term) that are interesting to read
Weaknesses: - It might be interesting to see the extent to which the energy function learned by EBM correlates to real physical energy.
- It might also be interesting to structure/augment the energy function with ground truth physical energy functions
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: See weaknesses
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 4 excellent
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear Reviewer Xiwj,
Thank you for taking to time to review our manuscript and providing thoughtful feedback. The following is our correspondence to the weaknesses and questions.
-----
***Weaknesses:***
>*It might be interesting to see the extent to which the energy function learned by EBM correlates to real physical energy.
It might also be interesting to structure/augment the energy function with ground truth physical energy functions*
Thank you for bringing up this very interesting question. First, we would like to clarify that the probabilistic energy and the physics energy are not the same. In some cases they might coincide, for instance in a Restricted Boltzmann Machine describing an Ising model. In the current setting, however, the energy of the probabilistic model is more of a concept trying to describe the radiation patterns within jets. If we restrict all the training jets to a specific physics energy (say 600 GeV), there are still variations in the radiation patterns and jet substructures. These variations are what the energy in the EBM is trying to describe and encode.
In addition, in order not to cause confusion, we added a brief clarification in the revised manuscript.
-----
Hope we have addressed all the concerns. Please let us know if you have any other questions or comments. We would be happy to have further discussion.
Best regards,\
Authors | Summary: This work applies recent EBM techniques to model jet streams from the LHC. The work adopts a transformer architecture for the EBM to allow a permutation invariant representation that captures high order relations between particles. EBM models of LHC particles are learned using techniques derived from recent image EBM works. Samples generated from the model are shown to closely match statistics of the observed data, and OOD detection with the learned EBM is performed and shown to have superior performance compared to VAE methods.
Strengths: * This paper is the first to explore EBM learning in the area of high-energy physics. It is exciting and interesting to see research into EBMs for new and challenging domains.
* Experimental results provide convincing evidence of the efficacy of the learned model for generation and OOD.
Weaknesses: * The paper lacks theoretical and methodological novelty. The work primarily focuses on applications of EBMs to a new domain.
* It appears that the primary practical application of the paper is OOD detection. In this case, it might be more effective to use an EBM designed specifically for OOD, as in [a]. I am not sure use of a generative EBM is well-justified. Tailoring the methodology more towards OOD would increase the focus of the paper, which tends to wander somewhat.
[a] https://arxiv.org/pdf/2010.03759.pdf
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: * In the view of the authors, what are the most important novel aspects of their work?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 2 fair
Contribution: 2 fair
Limitations: Limitations were not discussed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear Reviewer PfZ9,
Thank you for taking to time to review our manuscript and providing thoughtful feedback.
-----
***Weaknesses:***
> *The paper lacks theoretical and methodological novelty. The work primarily focuses on applications of EBMs to a new domain.*
> *It appears that the primary practical application of the paper is OOD detection. In this case, it might be more effective to use an EBM designed specifically for OOD, as in [a]. I am not sure use of a generative EBM is well-justified. Tailoring the methodology more towards OOD would increase the focus of the paper, which tends to wander somewhat.*
Thanks for the feedback. In this work, we established an energy-based probabilistic modelling framework for High Energy Physics events. As we elaborated in the manuscript (in both the *Introduction* and the *Problem Statement*), the EBM for HEP is a multi-purpose framework established for High Energy Physics.
Actually, OOD detection is only one of the practical applications of the EBM for HEP. One of the most interesting aspects of using the EBM for OOD detection is that the energy score is generally decorrelated with jet mass (see the paragraph starting from line 263 in section 4.2).
We appreciate the suggestion that "tailoring the methodology more towards OOD would increase the focus of the paper"; unfortunately, we cannot agree with that approach. It would entail another OOD-detection-focused work, which is not the main purpose of ours.
***Questions:***
> *In the view of the authors, what are the most important novel aspects of their work?*
As stated above, we established an energy-based multi-purpose learning framework for High Energy Physics. This work is a holistic approach to integrating important machine learning methods and techniques in a scientific domain. It solves the most important tasks for Large Hadron Collider physics in a data-driven manner.
In addition, one important advantage of this work is that the designed EBM provides us with a flexible framework to incorporate many complex tasks. It paves the way for a multi-purpose deep model that is adaptive, multi-functional, performant, and robust. For instance, we can build an EBM-based event generator that is controllable and facilitates generation prompts.
***Limitations:***
> Limitations were not discussed.
We briefly mentioned the limitation of MCMC sampling in footnote 1. One limitation we consider important is the training and generation speed, since MCMC steps are involved in each training iteration. We were able to balance training quality and speed by using short-run MCMC (24 LD steps) in training iterations. It is also worth exploring other solutions to accelerate test-time generation. We would be happy to extend the discussion in the revised manuscript or in the appendix.
-----
Hope we have addressed all of your concerns. Please let us know if you have any further questions or comments.
Best regards, \
Authors
---
Rebuttal Comment 1.1:
Title: Thanks for the author responses. I will keep my score for now.
Comment: Thanks to the authors for their responses to myself and other reviewers. I find the work done by the authors to be very interesting and relevant from the HEP perspective. However, I still find the methodology to be essentially the same as commonly used EBM methodology in the image domain. Thus, I am not sure whether this work will have a high degree of impact from the machine learning perspective, and I choose to keep my score for now. I would be willing to reconsider my score if there is a clear contribution to EBM learning beyond what is commonly performed in the image domain.
---
Reply to Comment 1.1.1:
Comment: Though we acknowledge the reviewer's perspective, we emphasize that this work is submitted under the “Machine Learning for Physical Sciences” track. This work's focus and primary objective is to solve scientific problems with advanced machine learning techniques.
AI for Science projects often transcend methodological novelty, as seen in AlphaFold's usage of the established attention mechanism, without diminishing its scientific impact.
NeurIPS is a general venue that encourages interdisciplinary submissions. Limiting NeurIPS to methodological novelty could narrow its scientific vision and hinder interdisciplinary scientific communication. We thank the reviewer for their understanding. | Summary: This paper explores the use of energy based models to model the distribution of jets in high energy particle physics. Here, the data consists of a vector of events, each of which is described by a momentum transverse to the beam and 2D spatial coordinates. The goal is to learn the true distribution of jets from real data via an energy based model, which can subsequently be used for Out-Of-Distribution detection and classification, in the manner of the paper "Your classifier is secretly an energy based model and you should treat it like one".
The model is trained with gradient ascent, using Langevin sampling to estimate the intractable expectation in the loss function, and making use of (persistent) Contrastive Divergence for the loss. An additional KL term is used to minimize the difference between the true model distribution and the Langevin approximation.
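To make the Langevin-sampling step described above concrete, here is a minimal, self-contained sketch of unadjusted Langevin dynamics, the negative-sample generator used inside contrastive-divergence training. The quadratic toy energy and all parameter values are illustrative assumptions, not the paper's actual model or settings.

```python
import numpy as np

def langevin_sample(grad_energy, x0, n_steps=100, step_size=0.01, rng=None):
    """Unadjusted Langevin dynamics: x <- x - (s/2) * grad E(x) + sqrt(s) * noise."""
    rng = np.random.default_rng(0) if rng is None else rng
    x = x0.copy()
    for _ in range(n_steps):
        x = (x - 0.5 * step_size * grad_energy(x)
             + np.sqrt(step_size) * rng.standard_normal(x.shape))
    return x

# For the toy energy E(x) = ||x||^2 / 2 (gradient: x), the sampler's
# stationary distribution is close to a standard Gaussian.
grad_E = lambda x: x
samples = np.stack([
    langevin_sample(grad_E, np.zeros(2), n_steps=500, step_size=0.1,
                    rng=np.random.default_rng(seed))
    for seed in range(200)
])
print(samples.mean(), samples.std())  # mean near 0, std near 1
```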
Strengths: The paper appears to achieve success in a difficult domain of application. Their validation suggests that the energy based model they train successfully captures the desired data distribution, at least up to well-known features. Furthermore, they show that the energy based model can be altered to a supervised setting, and used as a classifier.
The use of the transformer as the neural architecture to model inter-particle interactions also appears to be a successful choice.
Weaknesses: The ideas in this paper were often conveyed poorly, and it was quite hard to understand the technical details, either of the energy based model training procedure, or of the scientific problem in question.
In equation 2, x+ and x- were not clearly explained, making the paper hard to read for someone unfamiliar with the details of contrastive divergence. Using the expectation notation of (in Latex) $E_{x^+ \sim p_D}$ would have helped clarify that the x+ are drawn from the data distribution and the x- from the (approximation of) the model distribution.
Algorithm 1 describes the "stopping gradient", but more information is needed here.
The equation in Figure 1 are blurry - it needs to be redone in higher resolution.
More explanation is needed for the anomaly detection task. I did not understand the set up in adequate detail. You say: "With an energy-based model, which is trained on the QCD background events or directly on the slightly signal-contaminated data, we expect unseen signals to have higher energies and correspondingly lower likelihoods". But what is a signal vs a background event here? This needs to be clear for a non-HEP audience.
What the labels are in the classification task is unclear.
The paper notes that "A more intriguing property of EBMs is that spurious correlations can be better handled", but it wasn't clear to me either what spurious correlations referred to here, or why EBMs would be better.
"More interestingly, compositionality can be easily incorporated within the framework of EBMs by simply summing up the energy functions ": while I understand what is meant here, the word "compositionality" means many things in different contexts, and is too vague. While other papers on EBMs have used "compositionality" in this sense, you should either cite them, or explain directly what you mean by it.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: Which are the original contributions of the paper on the machine learning side, if any? In particular, is the particular version of Contrastive Divergence used, (in the section KL Divergence-Improved EBM Training) key to your success here? I'd like to see more discussion of this KL term, which is quite interesting.
I would be interested to know how this paper fits into a larger research project. Are there further directions that could be pursued, in terms of applications? Did the authors find the approach satisfactory, or were there limitations that concerned them?
Suggestions:
I was able to understand the structure of the paper fully only after reading the following two papers: https://arxiv.org/pdf/1912.03263.pdf and https://arxiv.org/pdf/1903.08689.pdf. The first describes the three applications of energy models you are interested in (generative modeling, OOD detection, and classification). The second describes the Contrastive divergence approach, and cites detailed notes which explain its derivation; your training algorithm appears to be a modification of their Algorithm 1. I would summarize your paper as "an application of the approach of these papers to HEP Jet modeling". While you do cite both of these papers, I would recommend highlighting them more, since they form the backbone of your approach and will help the reader to follow along.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: Are there ways of incorporating domain specific models of HEP into the current approach? Currently it seems that the physics incorporated into simulators is not leveraged here. It would also be interesting if known expressions or heuristics for the energy could be incorporated into the energy function.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear Reviewer npbJ,
Thank you for taking the time to review our manuscript and providing thoughtful feedback. We have made revisions to clarify the technical details. The following is the correspondence to the specific questions/comments.
-----
***Weaknesses:***
> 1. *The ideas in this paper were often conveyed poorly, [...]*
Thanks for the feedback. We try our best to make the article understandable for both machine learning practitioners and domain scientists. We have made several clarifications in the revised manuscript (as discussed below).
> 2. *In equation 2, x+ and x- were not clearly explained, [...]*
Thanks for the suggestion. We have incorporated it in our revised manuscript. We have adjusted Eq. 2 as $E_{x^+ \sim p_D(x)} [\nabla_\theta E_\theta (x^+)] - E_{x^- \sim p_\theta (x)} [\nabla_\theta E_\theta (x^-)]$. We also adjusted other equations accordingly to improve clarity.
> 3. *Algorithm 1 describes the "stopping gradient", but more information is needed here.*
We have added an explanation in Line 147, which we quote here: $E_{q(\textbf{x})} [E_{\hat \theta}(\textbf{x})]$ ($\hat \theta$ denotes a stop-gradient on the energy function's parameters, because the gradient of this extra KL term is only propagated through the MCMC distribution $q_\theta(\textbf{x})$).
> 4. *The equation in Figure 1 are blurry - it needs to be redone in higher resolution.*
Thanks for the feedback. The image quality of Figure 1 is already improved. Please refer to the pdf attached to the global rebuttal for the improved version.
> 5. *More explanation is needed for the anomaly detection task. [...]*
As indicated in the sentence, the background events are QCD jets, any other non-QCD jets can be considered signals. We clarified this point in the revised manuscript as follows:
> With an energy-based model, [...], we expect unseen signals (i.e., non-QCD jets) to have higher energies and correspondingly lower likelihoods.
> 6. *What the labels are in the classification task is unclear.*
The labels are jet types in the classification task as we are trying to classify different Standard Model jet types (QCD, W, and Top). Clarification is made in the revised manuscript.
> 7. *[...] it wasn't clear to me either what spurious correlations referred to here, or why EBMs would be better.*
Please refer to section 4.2 (Line 163) for more details.
> 8. *While other papers on EBMs have used "compositionality" in this sense, you should either cite them, or explain directly what you mean by it.*
Citations were already present in the manuscript (at the end of this sentence).
***Questions:***
> 1. Which are the original contributions of the paper on the machine learning side, if any? [...]
There is no one single element that could serve as the magic key. The neural architecture in use, the training strategies, the hyper-parameters, the validation methods, and the test-time generation strategy are all crucial for the success. For instance, we used fewer Langevin Dynamics (LD) steps in the training iterations (24) and more steps in validation iterations (128) and test-time sampling (200), and we annealed the step size in test-time sampling.
In our experiments, the extra KL term is helpful in improving generation quality and training stability. The theoretical consideration largely follows that of [1]. Empirically, we found it most helpful to pass through all the LD steps (to improve stability and quality) and simply drop the entropy term (to accelerate training).
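The annealed test-time step size mentioned above could be implemented as a simple schedule. The paper's exact schedule is not given in this exchange, so the geometric decay and the endpoint values below are assumptions for illustration only; the number of steps (200) matches the test-time sampling count quoted above.

```python
import numpy as np

def annealed_step_sizes(n_steps=200, s_init=1e-2, s_final=1e-4):
    """Geometrically decaying Langevin step sizes for test-time sampling."""
    return np.geomspace(s_init, s_final, n_steps)

steps = annealed_step_sizes()
print(len(steps), steps[0], steps[-1])  # 200 steps, decaying from 1e-2 to 1e-4
```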
> 2. I would be interested to know how this paper fits into a larger research project. [...]
We already presented a few most important applications for the Large Hadron Collider Physics. As stated in the *Introduction*, event generation and OOD detection are among the most crucial tasks in LHC physics.
One important motivation for this work is that the EBM provides us with a flexible framework to incorporate many complex tasks. It paves the way for a multi-purpose deep model that is adaptive, multi-functional, performant, and robust. For instance, we can further build an EBM-based event generator that is controllable and facilitates generation prompts.
One limitation we consider important is the training and generation speed, since MCMC steps are involved in each training iteration. We were able to balance training quality and speed by using short-run MCMC (24 LD steps) in training iterations. We will be motivated to further explore solutions to accelerate test-time generation.
> 3. Suggestions:
I was able to understand the structure of the paper fully only after reading the following [...]
Thanks for the suggestion. As summarized in previous responses, this work is a holistic approach to integrating important machine learning techniques in a scientific domain. There are quite a few references we consider important, and the two you mentioned are indeed very helpful. We reworked relevant parts of the manuscript to improve clarity and help the readers to trace back literature efficiently.
***Limitations:***
> Are there ways of incorporating domain specific models of HEP into the current approach? [...]
There are advantages to our bottom-up approach of learning the underlying radiation patterns directly from data. However, it is possible to incorporate more physics inductive biases, such as designing Lorentz-equivariant [2] energy models (these models are, however, computation-demanding and could slow down the EBM training even further). That could be an interesting future research direction.
It is an interesting question of connecting physics energy and the energy in the probabilistic model. Due to the length limitation of the response, please refer to our response to Reviewer Xiwj for more information. Thanks!
[1] Du, et al. "Improved contrastive divergence training of energy based models." (2020).
[2] Bogatskiy, et al. "Lorentz group equivariant neural network for particle physics." (2020).
Best regards,\
Authors
---
Rebuttal Comment 1.1:
Comment: Dear authors,
Thanks for this response, it addresses most of my concerns.
---
Reply to Comment 1.1.1:
Comment: Dear Reviewer npbJ,
Thank you for your comments. If any of our previous answers are unclear, we would be happy to provide further clarification.
Best regards, \
Authors | Rebuttal 1:
Rebuttal: Dear reviewers and AC,
We thank all the reviewers for your time and helpful feedback to make this article a better version.
In the present study, we endeavor to synthesize various prior initiatives, with the goal of progressing toward a robust, multi-purpose modeling framework tailored for High Energy Physics (HEP) events. By constructing an energy-based model, we facilitate the probabilistic modeling of low-level feature distribution directly. The successful training of these EBMs has enabled us to generate high-quality events. A noteworthy observation is that the EBMs can be employed for model-independent Out-Of-Distribution (OOD) detection, while simultaneously remaining unencumbered by spurious correlations. In addition, the hybrid modelling combining EBM and supervised classification is an example of how we unify different physics tasks (jet classification, anomaly detection, and generative modeling) in a single framework.
According to the reviewers' comments, we have revised the manuscript to improve clarity. In summary,
* We clarified some technical details including but not limited to
* notations in equations: such as $E_{x^+ \sim p_D(x)} [\nabla_\theta E_\theta (x^+)] - E_{x^- \sim p_\theta (x)} [\nabla_\theta E_\theta (x^-)]$ in Eq. 2.
* domain-specific explanations: further explained the anomaly detection procedure in the context of new physics searches.
* experimental details: improved dataset description in Appendix A.
* We fixed a few presentation issues in figures and tables (you can find the revised version in the attached pdf).
We hope that we have adequately responded to all of your inquiries and concerns. Should you have any additional questions or comments, please do not hesitate to let us know. We remain open to further suggestions that may enhance the quality of the manuscript.
Thank you again for your time!
Best regards, \
Authors
Pdf: /pdf/a5bc0b0729ce565ac4390b5124f45a5fed4b6bba.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Privacy Amplification via Compression: Achieving the Optimal Privacy-Accuracy-Communication Trade-off in Distributed Mean Estimation | Accept (poster) | Summary: This paper studies the distributed mean estimation problem with communication constraints and central DP. Communication constraints are met by subsampling a Kashin representation of the target vector, and \eps-DP is achieved by adding Gaussian noise. Privacy in the absence of a trusted server can be achieved using an LDP mechanism along with a secure shuffler.
After discussions with the authors, I have increased my score.
Strengths: The work claims to be the first to study the communication-privacy-accuracy tradeoff for DME problem with central DP. It provides a simple scheme that achieves order-optimal error bounds.
Weaknesses: While this is the first work to study the problem, the solution was well-known. Unlike the LDP constraints, there is no tension between the communication cost and privacy in this case. Therefore, in my opinion, the novelty of this work is limited. The results are a straightforward amalgamation of well-known techniques that are independently known to be optimal.
Technical Quality: 4 excellent
Clarity: 3 good
Questions for Authors: The results are intuitively correct and are technically sound.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 3 good
Contribution: 3 good
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely appreciate the reviewer's acknowledgment of the correctness and technical soundness of our results. However, we kindly disagree with the statement that *"Unlike the LDP constraints, there is no tension between the communication cost and privacy in this case."* Indeed, we believe the situation is exactly analogous to the local DP case, and showing this is part of the contribution of our paper. For local DP, one may a priori expect a tension between privacy and communication cost, e.g., privatizing the local messages can increase their entropy and hence make them more difficult to compress. However [Chen et al. 2020] and others have shown that these two constraints can be aligned in a way such that the optimal accuracy is only dictated by one of the two constraints and the less stringent constraint can be satisfied for free. Similarly for central DP, one may a priori expect that compression will increase the sensitivity of the mean and therefore require the addition of larger noise at the central server, and hence lower accuracy, for the same privatization level.
The main contribution of our paper is to show that this does not need to be the case when compression is done randomly and privacy amplification due to random compression is taken into account. We believe this idea of using random compression for privacy amplification is novel and does not appear in the prior literature. In addition, we also characterize the optimal three way trade-off for the shuffling model. While previous research has explored communication efficiency under local DP or secure aggregation, our work stands out as the first to comprehensively characterize the communication-privacy-accuracy trade-off for both the central DP and shuffling models.
[Chen et al. 2020] Breaking the communication-privacy-accuracy trilemma
---
Rebuttal Comment 1.1:
Title: devil's advocate, not an adversary
Comment: Thanks for the clean perspective.
"Similarly for central DP, one may a priori expect that compression will increase the sensitivity of the mean and therefore require the addition of larger noise at the central server, and hence lower accuracy, for the same privatization level."
However, without any local privacy constraints, using Chen et al. (essentially the same random compression as in this work), the optimal accuracy is dictated only by the communication constraint. Now at the central server, adding Gaussian noise to the mean of the compressed data is known to be optimal. Therefore, I was questioning the novelty in terms of the techniques and how they deviate from the straightforward application of Chen et al. and folklore Gaussian mechanism.
---
Reply to Comment 1.1.1:
Title: Response to the follow-up comment
Comment: We thank the reviewer for offering additional details regarding the question. We would like to clarify that the straightforward application of the compression in [Chen et al] with the Gaussian mechanism is *strictly sub-optimal*, since the compressed (and properly decompressed) mean has a much higher sensitivity. Consequently, the direct implementation of the standard Gaussian mechanism inherently gives rise to a larger amount of MSE.
The main contribution of this paper is to show that in order to achieve optimal privacy-utility trade-off when the local data is compressed, one has to take into account the randomness in the local compression and carefully leverage it to reduce the amount of DP noise. | Summary: This paper studied the optimal rates of the tradeoff among privacy (central DP), utility ($\ell_2$ estimation error), and communication cost (in bits) for the distributed mean and frequency estimation settings. The authors used Kashin's representation and the Coordinate Subsampled Gaussian Mechanism to achieve the optimal rates. They also propose a many-round procedure with a shuffling mechanism to achieve the optimal rate if there is a secure shuffler instead of a secure central server. Simulation results are provided to support the theoretical results of the distributed mean estimation setting.
Strengths: 1. This paper provides the optimal rates for the distributed mean estimation given specific privacy and communication constraints, which is a fundamental problem in distributed learning (with a central server).
2. The authors proposed the Coordinate Subsampled Gaussian Mechanism and used it to realize the optimal rates.
3. The theoretical results are supported by the simulation results.
Weaknesses: 1. In Theorem 5.3, the rate is $O(C^2 d \ldots)$ while in the appendix Lemma G.1 and Proposition G.2, the rates are $O(c^2 d)$. As Theorem 5.3 is also using Kashin's representation, I think it should be $c=O(\frac{C}{\sqrt{d}})$ as in Line 211. Therefore, there could be something incorrect in the result of Theorem 5.3. However, removing $d$ from the current result also seems to be strange to me.
2. The authors derived the theoretical results for the histogram estimation while not showing corresponding simulation results. They claimed these results in the abstract and introduction as contributions while only showing them in the appendix, which may not be very appropriate.
3. This paper uses the name 'federated learning' many times while the problem being solved is more restricted to the distributed learning (with a central server).
Minor weaknesses:
1. Citation issues: [33] is cited as 'in submission' while it is already published in AISTATS 2023. [59] and [60] are duplicates.
2. Line 91. $10^6-10^9$ -> $10^6$ to $10^9$.
3. In Theorem 4.1, $\sigma$ should use $\Omega$ notation since we need it large enough to guarantee privacy. Similar misuses are in other results. This can be seen in the transition from $\sigma_1^2$ to $\sigma^2$ in Line 630 where the former used $\Omega$ while the latter used $O$.
4. Line 358. $\lceil \log d\rceil + 1)T$ should be $(\lceil \log d\rceil + 1)T$
5. Line 361. The period is missing at the end of this line.
6. The citation of Theorem B.3 is missing.
7. Line 628. $e_2^\epsilon$ should be $e^{\epsilon_2}$.
8. Line 642. The second term in this equality is random which must be incorrect. Nevertheless, the final result is unaffected.
---
I have read the rebuttal which addressed my questions and the weakness concerns.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. Line 212. What does the dot product in $x_i = K \cdot \tilde x_i$ mean? Is it a matrix multiplied by a vector?
2. Are the notations of $(\alpha)$ and $(\beta)$ in Equation (4) in Line 217 misplaced? I think $(\alpha)$ is only the first part of Equation (4) and $(\beta)$ is the remaining two parts. Nevertheless, this seems not to affect the analysis in Section 4.1.
3. Line 354. For the statement of $d=5000$, is it from the previous claim based on the theoretical results '$b$ is independent of $d$'? Or is the result of $d=5000$ missing in this manuscript?
4. Line 622 in Section B. Should the sensitivity of $S_j(x^n)$ be $2c$? Should $Z_{i,j}$ be i.i.d. from $\mathrm{Bern}(\gamma)$ instead of $\mathrm{Bern}(1/\gamma)$?
5. Line 630. Should $\log(d/\delta)$ be $\log(d\gamma/\delta)$? Is $\gamma$ missing here (and elsewhere) intentionally?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: N.A.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate the reviewer's careful reading, valuable questions, and constructive feedback.
**Response to Weaknesses.**
- We apologize for the inconsistency in notation; the rates in Lemma G.1 and Proposition G.2 should be $O(C^2 d ...)$ instead of $O(c^2 d ...)$, and our main results (Theorem 5.3) are indeed correct as stated. We want to clarify that the results stated in Lemma G.1 and Proposition G.2 are already for the $\ell_2$ geometry, not the $\ell_\infty$ one.
- We chose to move our results on histogram estimation (for federated analytics) to the appendix due to the strict page limit, but if this paper is accepted and we are granted an additional page for the full version, we will definitely include the result for histogram estimation together with simulation results in the main body of the paper.
It is important to note that our scheme is unbiased, so the estimation error is not directly comparable with other frequency estimation schemes (which involve thresholding to reduce estimation error and, hence, are typically biased).
- We believe our setting is indeed well-aligned with standard federated learning settings [Mcmahan et al. 2016, Kairouz et al. 2019]. As described in [Kairouz et al. 2019],
*"Federated learning (FL) is a machine learning setting where many clients (e.g. mobile devices or whole organizations) collaboratively train a model under the orchestration of a central server (e.g. service provider), while keeping the training data decentralized."*
To this end, our work focuses on optimizing communication efficiency under central DP or shuffle DP, which are two of the most prominent privacy frameworks for federated learning (another one is local DP which we do not consider in this paper). We characterize the fundamental limits and algorithms to achieve (nearly) optimal communication cost while still achieving the same order of estimation error. Our simulations further support the theoretical results.
[Mcmahan et al. 2016] Communication-efficient learning of deep networks from decentralized data
[Kairouz et al. 2019] Advances and open problems in federated learning
- Lastly, we appreciate the reviewer's detailed input about typos and minor issues; we will promptly fix them in the revision.
**Response to Questions.**
- Yes, here $K\in\mathbb{R}^{d\times D}$ is the matrix representing the tight frame, and $K\cdot \tilde{x}\_i$ is a matrix-vector multiplication. Regarding Kashin's representation and its use in transforming an $\ell_2$ problem to an $\ell_\infty$ one, we will include a section in the appendix that elaborates on this topic. See also our response to Reviewer rG5G for more details on Kashin's representation.
- We agree with the reviewer's point that the middle term $O\left(\frac{C^2d^2\log(1/\delta)}{n^2b^2}\right)$ can also be viewed as part of the impact of privatization, as it comes from the DP noise (as stated in Theorem 4.1). However, it becomes significant only when local vectors are drastically compressed (i.e., when the sampling rate $\gamma$ is sufficiently small). We will further clarify this aspect in the revision to avoid any ambiguity.
- We apologize for the confusion regarding the experimental results for $d=5000$. We acknowledge that the previous version had these results but was removed due to space constraints. We will add it back in the appendix for a more comprehensive presentation.
- Thank you for bringing the typos to our attention. We will fix them in the revision. The sensitivity in Section B should be $2c$ (due to our adoption of a replacement notion of DP instead of a removal version), and $Z_{ij} \sim \text{Bern}(\gamma)$. That being said, the final results remain unaffected.
- In line 630, we opt to use an upper bound on $\sigma_1^2$ in order to simplify notation. Also note that we are mainly interested in the regime where $\gamma = \Omega( 1/\sqrt{d})$ (which will be used to achieve the dimension-free communication cost as in Theorem 4.4), so asymptotically $\log(d\gamma/\delta) \asymp \log(\sqrt{d}/\delta) \asymp \log(d/\delta)$. We will clarify it in the proof of Theorem 4.1.
---
Rebuttal Comment 1.1:
Title: Thank you for the response!
Comment: The authors have addressed most of my questions. In the two papers [Mcmahan et al. 2016, Kairouz et al. 2019] provided, the definition of federated learning includes several key properties: Non-IID, Unbalanced, Massively distributed, and Limited communication; They also said, 'An unbalanced and non-IID (identically and independently distributed) data partitioning across a massive number of unreliable devices with limited communication bandwidth was introduced as the defining set of challenges.' Therefore, I still think this paper only provides insights on Distributed Mean Estimation or in general distributed learning, but not more general federated learning (missing the discussion of unbalanced and non-IID settings).
I have raised my score from 5 to 6.
---
Reply to Comment 1.1.1:
Title: Thank you for increasing the score!
Comment: We thank the reviewer for valuable suggestions and for increasing the score. We agree with the reviewer on the multifaceted nature of FL and that our work centers specifically on its privacy and communication aspects. We also agree with the reviewer's suggestion that exploring more practical scenarios, such as unbalanced data or personalization, is crucial to enhancing the general understanding of FL's real-world implications.
We would, however, like to briefly clarify that our work does not assume local data to be generated i.i.d. Our goal is to estimate empirical means (a canonical formulation of FedAvg or FedSGD), which does not involve any distributional assumption on local data/gradients.
Once again, we express our gratitude to the reviewer for their valuable feedback and constructive suggestions. | Summary: The paper proposes a new randomization scheme for differentially private mean estimation. It is based on each user randomly choosing the coordinates to be sent in their messages. A similar scheme has been proposed in reference [55] (Hu et al., 2020); however, Hu et al. do not sparsify the individual data-element-wise vectors, only mini-batch gradients.
In the proposed scheme, data-element-wise vectors are sparsified randomly, and the server averages these sparse vectors and adds normally distributed noise. A classical differential privacy analysis (meaning it uses some classical $(\varepsilon,\delta)$-amplification results) is carried out, resulting in optimal communication-privacy-accuracy tradeoffs (also solving an open problem in the untrusted-server setting) when these quantities are measured asymptotically.
Algorithms are given both in the trusted-server setting and in the untrusted-server setting (via multi-message shuffling).
The privacy analysis is essentially for the $\ell_\infty$-mean estimation, and the $\ell_2$-mean estimation bounds are obtained using the so-called Kashin's representation, which turns $\ell_2$-constraints into $\ell_\infty$-constraints in an optimal way.
Strengths: - solves an open problem and gives optimal tradeoffs between privacy, accuracy and communication in both the trusted and untrusted settings
- Generally well written
- extensive list of references and a comprehensive 'related works' section. I think the paper serves even as a good starting point for diving into this topic.
Weaknesses: - I think some parts are too densely written. Kashin's representation is not introduced properly, I think; lines 208 to 214 simply give too little information. For instance, what is that tight frame $K$? I think there is some (still quite short) introduction in reference [34]. It would help a lot to have some more material on Kashin's representation, e.g., in the appendix. You use it repeatedly to get the $\ell_2$ mean estimation bounds anyway.
- Related to the above comment, and for further improving writing: in lines 645-646 in the appendix you write "applying the same trick of Kashin's representation"... what are you referring to with 'the same' ? Here also would help expanding / elaborating a bit.
- I think the paper is not entirely explicit about the difficulty of accurate privacy accounting in case of $\ell_2$-mean estimation (see the question comment below in the Questions - section)
- It is not entirely clear how well the method fits to reducing communication in FL training of ML models if one would need to randomly sparsify individual gradients (see comment in the Questions - section)
Technical Quality: 4 excellent
Clarity: 3 good
Questions for Authors: - You mention that the privacy analysis could possibly be improved, and the log factors shaved off. Somehow I get the impression that you could elaborate on this more... Looking at the trusted server setting, I get the impression that the $\ell_\infty$-mean estimation algorithm one could perhaps analyse tightly (with RDP or directly with approximative DP). In a sense I think you do that in the experimental example. However, again with the $\ell_\infty$-constraints, how would you go about involving the Kashin's representations in the analysis? I get the impression that, e.g., in reference [34], the privacy bounds are simply big-O bounds as there are some universal constant involved when transforming between the constraints. Do you think it would be possible to analyse $\ell_2$-mean estimation algorithm using this coordinate subsampling tightly?
- How would this fit into federated learning? I mean, wouldn't the sparsity be lost quickly in case each device/user has model updates constructed using several data elements? Wouldn't sparsification similar to that of [55] then be required to reduce the communication?
Minor comments:
- Shouldn't "$\ell_2$ error" in Table 1 actually be "$\ell_2^2$ error"?
- In bibliography, please use the Arxiv reference for the reference [33] instead of "in submission"
- The reference paper [55] (which is quite central here) has been published in IJCAI, you only mention the Arxiv reference.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 3 good
Contribution: 4 excellent
Limitations: Yes, I think the limitations with the analysis are adequately discussed in Section 7.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate the reviewer's comprehensive summary of our work. We would like to clarify the differences between our techniques and those presented in [Hu et al.]. One of the main distinctions is that Hu et al. consider a mini-batch setting, where sparsification, a form of dimensionality reduction, is performed *after* locally perturbing the mini-batch gradients. Although the $\ell_2$ norm of the noise might appear reduced due to the projection step, constructing an unbiased estimator requires multiplying the sampled gradient by a factor of $1/p$, where $p$ is the sampling rate. Consequently, the $\ell_2$ norm of the noise in the unbiased estimator indeed increases by a factor of $1/\sqrt{p}$, resulting in a suboptimal privacy-communication-accuracy trade-off.
In contrast, our techniques independently apply the sampling step to different clients, and the noise is added to the mean of the decompressed gradients, rather than the original (local) uncompressed gradients. Most significantly, the randomness of local sampling is leveraged in our privacy analysis, enabling a tighter trade-off. As the title of our paper suggests, we believe the key observation that our paper brings is that the randomness in the local compression can be leveraged for privacy amplification in central DP. It is important to note that the randomness of the sampling/projection in [Hu et al.] is not taken into account in their analysis.
Additionally, we present a communication-efficient scheme that achieves (nearly) optimal communication cost and MSE under a shuffle DP guarantee.
Response to Weaknesses:
- Regarding Kashin's representation, roughly speaking, a tight frame can be interpreted as a "basis" (i.e., a collection of $D$ vectors in $\mathbb{R}^d$ with $D > d$) with redundancy, and under this "basis", one can find a representation of any $x \in \mathbb{R}^d$ with small $\ell_\infty$ norm.
We have kept the overview of Kashin's representation brief in our submission as we felt like this has become a standard tool for DME in the literature, see for example [Chen et al. 2020], but following reviewers' suggestions we will provide a detailed discussion of Kashin's representation and how it can be used to transform the $\ell_2$ problem to an $\ell_\infty$ one in an appendix. Below we provide those details for reviewers' reference.
We first introduce the idea of a tight frame in Kashin's representation. A tight frame is a set of vectors $\\{u_j\\}^D_{j=1} \in \mathbb{R}^d$ that satisfy Parseval's identity, i.e. $ \| x \|^2_2 = \sum_{j=1}^D \left\langle u_j, x \right\rangle^2$ for all $x \in \mathbb{R}^d.$
A frame can be viewed as a generalization of the notion of an orthogonal basis in $\mathbb{R}^d$ for $D>d$.
To increase robustness, we wish the information to be spread evenly across different coefficients, which motivates the following definition of a Kashin's representation:
**Definition.**
For a set of vectors $\\{u_j\\}^D_{j=1}$, we say the expansion $ x = \sum_{j=1}^D a_j u_j $, with $\max_j | a_j | \leq \frac{K}{\sqrt{D}}\| x \|_2 $ is a Kashin's representation of the vector $x$ at level $K$ .
[Chen et al. 2020] Breaking the communication-privacy-accuracy trilemma
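To make the frame definition above concrete, here is a minimal numpy sketch (our own illustration, not the paper's code) that builds a Parseval tight frame from a random matrix with orthonormal columns and checks Parseval's identity and exact reconstruction. Note that a genuine Kashin-level frame additionally requires the coefficients to be well spread out (an uncertainty principle), which the naive analysis coefficients $a_j = \langle u_j, x\rangle$ below do not guarantee by themselves.

```python
import numpy as np

rng = np.random.default_rng(0)
d, D = 8, 16  # ambient dimension and redundant frame size, D > d

# A tight (Parseval) frame {u_j}_{j=1..D} in R^d: the rows of a D x d matrix U
# with orthonormal columns, so that U^T U = I_d.
U, _ = np.linalg.qr(rng.standard_normal((D, d)))

x = rng.standard_normal(d)
a = U @ x                      # analysis coefficients a_j = <u_j, x>

# Parseval's identity: ||x||_2^2 = sum_j <u_j, x>^2.
assert np.isclose(np.sum(a ** 2), np.sum(x ** 2))

# Perfect reconstruction from the redundant coefficients: x = sum_j a_j u_j.
x_rec = U.T @ a
assert np.allclose(x_rec, x)
```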
- Here, we refer to the same technique in the proof of Corollary 4.3 (which extends the $\ell_\infty$ mean estimation problem to $\ell_2$ mean estimation). Again, this is based on Kashin's representation and the details stated in the beginning of our general response can be easily provided in an appendix. We will also elaborate more in the proof to avoid any confusion.
- For the remaining two points, see our responses to the questions below.
Response to Questions:
- Regarding the theoretical analysis of privacy guarantees, we utilized the strong composition theorem in [Dwork et al. 2010], which is only tight up to some logarithmic factors. We acknowledge that employing more advanced accounting techniques such as the strong composition [Kairouz et al. 2016], moment accountants [Abadi et al. 2016], or Renyi DP accountant [Mironov et al. 2017] may yield slightly better bounds on $\varepsilon$. Indeed, in our experiments, we accounted for the privacy budgets via Renyi DP, which we believe offers a tighter bound than our theoretical (order-wise optimal) bounds.
Regarding the $\ell_2$ mean estimation in practice, we opted not to use Kashin's representation and instead perform random rotation with $\ell_\infty$ clipping. Although the $\ell_\infty$ clipping step introduces a small amount of bias, we believe it offers a better privacy-MSE trade-off in practice. One may be able to directly analyze the privacy of $\ell_2$ mean estimation with coordinate-sampling (without the Kashin's representation step), but currently it remains unclear.
[Dwork et al. 2010] Boosting and differential privacy
[Kairouz et al. 2016] The composition theorem for differential privacy
[Abadi et al. 2016] Deep learning with differential privacy
[Mironov et al. 2017] Rényi differential privacy
- Note that in the context of standard FL, it is not guaranteed that each local gradient or model update exhibits sparsity. As a result, additional sparsification procedures must be explicitly introduced to effectively mitigate communication costs. In the work of [55], this sparsification process involves the random selection of $k$ coordinates from a set of $d$ coordinates. Our scheme, the Coordinate Subsampled Gaussian Mechanism, on the other hand, can also be viewed as sparsification (as mentioned earlier). However, our primary contribution lies in demonstrating how this sparsification can be used to "amplify" central DP. In comparison to [Hu et al.], our scheme injects isotropic Gaussian noise after local sparsification and aggregation, which enables us to apply the amplification lemma via subsampling.
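To make the mechanism described above concrete, here is a toy numpy sketch of coordinate subsampling followed by server-side isotropic Gaussian noise, as we understand it from the rebuttal. The sampling rate, noise scale, and data are illustrative placeholders of our own, not the paper's calibrated parameters.

```python
import numpy as np

rng = np.random.default_rng(1)
n, d, gamma, c = 5000, 10, 0.3, 1.0  # clients, dimension, sampling rate, l_inf bound
sigma = 0.01                         # illustrative noise scale (not DP-calibrated)

# Clients' local vectors with ||x_i||_inf <= c.
X = rng.uniform(-c, c, size=(n, d))

# Local sparsification: each client keeps each coordinate i.i.d. with prob. gamma
# and transmits only the kept (index, value) pairs -- about gamma*d coords each.
masks = rng.random((n, d)) < gamma

# Server: aggregate the sparse vectors, rescale by 1/(n*gamma) for unbiasedness,
# then add isotropic Gaussian noise once, at the aggregate level.
est = (X * masks).sum(axis=0) / (n * gamma) + sigma * rng.standard_normal(d)

err = np.linalg.norm(est - X.mean(axis=0))  # shrinks as n grows
```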
Lastly, we appreciate the reviewer's attention to the typos and citation inconsistency, and we will promptly address those issues.
---
Rebuttal Comment 1.1:
Title: Reply
Comment: Thanks for your rebuttal. I still would like to ask about the last point. You replied:
> Note that in the context of standard FL, it is not guaranteed that each local gradient or model update exhibits sparsity. As a result, additional sparsification procedures must be explicitly introduced to effectively mitigate communication costs. In the work of [55], this sparsification process involves the random selection of coordinates from a set of coordinates. Our scheme, the Coordinate Subsampled Gaussian Mechanism, on the other hand, can also be viewed as sparsification (as mentioned earlier). However, our primary contribution lies in demonstrating how this sparsification can be used to "amplify" central DP. In comparison to [Hu et al.], our scheme injects isotropic Gaussian noise after local sparsification and aggregation, which enables us to apply the amplification lemma via subsampling.
My point was that wouldn't you need to sparsify every individual data-sample-wise gradient if you want to have the privacy amplification from the random sparsification? If a user has several gradients and sparsifies, e.g., the sum, then you would not get the desired amplification for the central guarantee; on the other hand, if you sparsify individual gradients and sum, you quickly lose the sparsity?
---
Reply to Comment 1.1.1:
Title: Reply to Reviewer's follow-up questions
Comment: Thank you for your prompt response and for providing additional clarification regarding your question. In the context of FL when each user has multiple data samples, the vector $x_i$ in our distributed mean estimation (DME) problem formulation can be taken as the mean local gradient computed at user $i$ (i.e., the average of sample-level gradients computed over $m$ samples at client $i$ for some arbitrary $m$). In other words, our Coordinate Subsampled Gaussian Mechanism and consequently the sampling-and-amplification lemma (Theorem 2) will be applied to the *average* gradient computed at each user and will achieve the order-optimal MSE. Note that the privacy amplification comes from the fact that we randomly sparsify the final vector $x_i$ each user aims to communicate to the server (i.e. a random subset of the coordinates of $x_i$ are communicated to the server). The fact that $x_i$ represents the *average* local gradient at each user does not change the privacy amplification.
We believe the reviewer's question also relates to the distinction between *user-level* DP (where two neighboring datasets differ in one *user*, which may hold $m$ samples/gradients) and *item-level* DP (where two neighboring datasets differ in one *sample*). Note that when applied to the average user-level gradient as described above, our scheme leads to user-level DP, which is stronger than item-level DP. We provide a more mathematical argument below.
Mathematically, under the multi-sample setting, each of the $n$ clients holds $m$ samples and each sample is associated with a (sample-level) gradient $g_{i, j}$, where $i \in [n]$ is the client index and $j \in [m]$ is the sample index. In each round of training, the server aims to estimate $\bar{g} = \frac{1}{nm}\sum_{i=1}^n\sum_{j=1}^m g_{i, j}$.
Since we focus on *user-level* DP, the $L_2$ sensitivity of $\bar{g}$ is defined as
$$ \max\_{(g\_{i,1},...,g\_{i,m}), (g'\_{i,1},...,g'\_{i,m})} \left\Vert \frac{1}{nm}\sum_{j=1}^m g_{i, j} - \frac{1}{nm}\sum_{j=1}^m g'_{i, j} \right\Vert_2 = \frac{2C}{n}, $$
where we assume each local gradient is clipped to $C$ ahead of time (note that the $\ell_2$ clipping is not needed if the loss function is $C$-Lipschitz).
If there is no communication constraint, one can apply the standard Gaussian mechanism and achieve a MSE $\mathbb{E}\left[ \left\Vert\bar{g} - \bar{g}_{\text{gauss}} \right\Vert^2_2 \right] = O\left(\frac{C^2d}{n^2\varepsilon^2}\right)$.
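As a quick numeric sanity check of the $2C/n$ user-level sensitivity stated above, the following numpy sketch (our own illustration, with hypothetical sizes) clips per-sample gradients to $\ell_2$ norm $C$, replaces one entire user's $m$ gradients, and verifies that the averaged gradient moves by at most $2C/n$:

```python
import numpy as np

rng = np.random.default_rng(2)
n, m, d, C = 50, 8, 5, 1.0   # users, samples per user, dimension, clipping norm

def clip(g, C):
    """Project each gradient onto the l2 ball of radius C."""
    norms = np.maximum(np.linalg.norm(g, axis=-1, keepdims=True), 1e-12)
    return g * np.minimum(1.0, C / norms)

G = clip(rng.standard_normal((n, m, d)), C)        # clipped sample-level gradients
G_prime = G.copy()
G_prime[0] = clip(rng.standard_normal((m, d)), C)  # replace user 0's entire data

gbar = G.mean(axis=(0, 1))          # (1/(nm)) sum_{i,j} g_{i,j}
gbar_prime = G_prime.mean(axis=(0, 1))

# User-level replacement sensitivity: ||gbar - gbar'||_2 <= 2C/n.
assert np.linalg.norm(gbar - gbar_prime) <= 2 * C / n + 1e-9
```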
On the other hand, to further reduce communication cost, one can also apply our coordinate-sampling algorithm CGSM (Algorithm 2 in our paper) or the shuffled-SQKR scheme (Algorithm 4) to user-level gradients $g\_i := \frac{1}{m}\sum\_{j=1}^m g\_{i, j}$, yielding an $\tilde{O}\left(\frac{C^2d}{n^2\varepsilon^2}\right)$ MSE as well as $\tilde{O}(n\varepsilon^2)$ bits of communication. This means that only user-level subsampling is necessary, and as mentioned by the reviewer, item-level sampling does not help as it does not necessarily sparsify the user-level gradient.
We note that [55] considers item-level DP (see the derivation of Theorem 1 therein) as opposed to user-level DP as described above. Although their framework could be adapted to user-level DP, such a modification would still fail to leverage the randomness introduced during the sampling step, thus leading to suboptimal communication costs.
As a side note, while our current discussion assumes full client participation in every training round with all local samples, our analysis extends to scenarios where clients are initially sampled by the server or where each client computes their local updates on a local mini-batch. By applying standard privacy amplification by subsampling [Balle et al. 2018], one can obtain an optimal privacy-accuracy trade-off.
Finally, we thank the reviewer for bringing this up and we plan to briefly mention the application to FL with multiple-samples per user in the revision as discussed above. We look forward to further discussions and input.
[Balle et al. 2018] Privacy Amplification by Subsampling: Tight Analyses via Couplings and Divergences | Summary: This paper studies the federated learning problem within the central differential privacy model, where a trusted central server gathers information from local clients. The primary focus is on minimizing communication costs while ensuring privacy and maintaining accuracy guarantees. To address this challenge, the paper introduces a novel privacy amplification framework. The framework operates by having each local client transmit only a random subsample of its full parameters to the central server, effectively reducing the communication overhead. Simultaneously, the randomness introduced in the subsampling process amplifies the privacy guarantee. Notably, in the specific case of distributed mean estimation, this framework significantly reduces the communication cost from an order of $O(d)$ to $\tilde O(n\min\\{\epsilon, \epsilon^2\\})$, showing a significant improvement over the state-of-the-art.
Strengths: This paper provides a novel privacy amplification method in central DP setting that optimizes the communication-privacy-accuracy three-way trade-off. Furthermore, the proposed distributed mean estimation also has significant implications. For example, upon setting $x_i$ to be local gradients, this algorithm immediately implies a private distributed SGD algorithm with reduced communication cost.
Weaknesses: The privacy amplification method is restricted to the central DP model and does not apply to LDP model.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: N/A
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 4 excellent
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate the reviewer's recognition of the novelty and significance of our techniques in private SGD and federated learning. As the reviewer notes, our schemes do not amplify local DP, since the randomness used for amplification (i.e., random seeds for compression/subsampling) must be known by both the client and the server. We note that this is also true for other amplification techniques, such as privacy amplification by shuffling or subsampling, which are primarily designed for central DP. | Rebuttal 1:
Rebuttal: We thank all reviewers for their valuable questions, insightful suggestions, and constructive feedback. We will fix all typos/minor issues in the revision and have provided responses to concerns raised by each reviewer below. We would greatly appreciate if the reviewers could consider updating the scores if the concerns are addressed in our response. | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Near-Optimal Algorithms for Gaussians with Huber Contamination: Mean Estimation and Linear Regression | Accept (poster) | Summary: In this paper, the authors give an improved algorithm for estimating the mean of a multivariate Gaussian random variable under the Huber contamination model. Diakonikolas et al. have given the first algorithm in this model that achieves the information-theoretically optimal $\ell_2$ loss of $\Theta(\varepsilon)$ using a polynomial number of samples when the noise rate is $\varepsilon$. Pensia et al. have given a fast streaming algorithm for the same problem with an $\ell_2$ loss of $\Theta(\varepsilon\log \varepsilon^{-1})$ with a sample complexity of $d^2\varepsilon^{-2}$, where $d$ is the dimension. The authors carefully combine the ideas in the above two papers to achieve the sample complexity of $d\varepsilon^{-1}$, which is optimal.
At a high level, a mean vector $\mu'$ and a subspace $V$ are first found such that $\mu'$ and $\mu$ are close in $\ell_2$ norm on $V^{\bot}$. Moreover, $V$ has dimension at most $\mathrm{poly}\log(d\varepsilon^{-1})$, so that the high sample complexity inspired by Diakonikolas et al. is affordable in that subspace. The subspace $V$ is found using ideas from the Pensia et al. paper, by looking at a random direction where the eigenvalue is large, as opposed to looking at the top $k$ eigenvalues as in the Diakonikolas paper, which incurs the worse loss.
The paper also gives a linear-sample algorithm for robust Gaussian linear regression by reducing it to the robust mean estimation problem for a certain conditional distribution of the multivariate Gaussian. This algorithm is slightly surprising to me, as any continuous distribution has zero probability of producing any particular point of the sample space exactly, and therefore conditional sampling is infeasible. However, the authors manage to achieve their reduction by carefully considering a large enough ball that allows the reduction to go through.
I have not carefully checked the mathematical details of the paper.
Strengths: Robust statistics in general, and multivariate robust mean estimation in particular, have received wide attention recently. Therefore, the improved sample-optimal algorithm derived in this paper should be very interesting to the community.
Weaknesses: None.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: A question to the authors: all the papers in this line of work achieve a recovery rate of, say, $\varepsilon/1000$, which looks quite lossy to me. Univariate robust estimation achieves a rate close to $\varepsilon$, or with smaller constants, as do several other papers in statistics such as [arxiv:2002.01432]. So, my question to the authors is: is there any hope that the constant can be improved, or does this filtering technique have some inherent barrier?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: None.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their positive comments and feedback.
**(Constants in the breakdown point)** We would like to point out that the constants could indeed be optimized, but that work falls outside the primary aim of our paper. The filtering techniques that we use, and their variants, have been used to get better constants and breakdown points. In particular, [Dia+18] already takes special care to improve the constant in the error guarantee. Moreover, [arxiv:2002.01432] at its core uses similar filtering techniques to obtain better breakdown points. Finally, the optimal breakdown point can be achieved using filtering-based algorithms; see, for example, [ZJS22; Exercise 2.10 of DK23; Appendix E of HLZ20]. As the primary focus of this paper is demonstrating that a nearly-linear runtime is possible, we did not optimize constants. | Summary: The authors study here a classical problem in statistics: the estimation of Gaussian means and linear regression with Gaussian covariates in the presence of Huber contamination.
This paper proposes a novel nearly-linear-time algorithm to compute the mean of a Gaussian in an $\epsilon$-corrupted, high-dimensional setting. The algorithm is obtained by combining two algorithms presented in the literature. The authors then apply this algorithm to Gaussian linear regression and analyze that setting.
Strengths: The paper seems technically sound and addresses some relevant questions in the field. The approach is based on an algorithm previously known in the literature [Dia+18], enhancing it with ideas from [DKPP22]
Weaknesses: While the paper provides a solid theoretical analysis of the time and sample complexity of these algorithms, it is less concerned with seeing them applied. A section could have been dedicated to applying the algorithm in cases where the parameter values are relevant to some applications.
Additionally, the work is focused on proving that the algorithm performs as claimed, but it does not address the limitations of the analysis of the algorithm. The conditions and assumptions are not clearly stated in the introduction of the problem.
The first part of the paper is straightforward, gives a clear introduction to the questions addressed by the paper, and is pedagogical in explaining the paper's position in the current literature. The technical part explaining the algorithm (specifically Section 3.2) takes more work to follow.
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: I believe actual experimentation is lacking: Is it possible to test the algorithm and see its performance for some distribution choices? While it is clear that the distribution is Gaussian, are there some constraints on the other distribution?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 2 fair
Limitations: The paper is theoretical in nature, so there are no limitations to discuss.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their time and feedback. We respond to the individual points below:
* **(Simulations)** We refer to the response given to reviewer rxyU regarding the same matter.
* **(``While it is clear that the distribution is Gaussian, are there some constraints on the other distribution?’’)** We are not entirely sure which other distribution the reviewer is referring to. If the reviewer is asking whether there are any constraints on the distribution of the outliers, then the answer is no: the outliers could be sampled from any (arbitrary) distribution. If the reviewer is asking whether the inlier distribution must necessarily be Gaussian, then we provide the response below:
* **(Distributional choices of inliers)** Regarding testing the algorithm for different distribution choices of inliers, we would like to point out that the Gaussianity assumption is somewhat critical for achieving error $O(\epsilon)$. Even for univariate sub-Gaussian distributions, the adversary can corrupt the distribution (in the Huber contamination model) such that every algorithm incurs error $\Omega(\epsilon \sqrt{\log(1/\epsilon)})$; see [DK19, page 4].
* **(Possible extensions)** As outlined above, it is impossible to relax the Gaussianity assumption to general sub-gaussian distributions for inliers. Nonetheless, we can extend our theoretical results to inliers following subgaussian distributions which are centrally symmetric (to ensure correctness of median) and satisfy Hanson-Wright inequality (to ensure that multi-directional filter works).
* **(``The conditions and assumptions are not clearly stated in the introduction of the problem’’)** The introduction provides an informal overview of the problem, and Section 1.1 provides complete formal statements of the two results, with all conditions and assumptions stated. We are not sure which missing conditions the reviewer is referring to.
* **(``Position in the current literature’’)** This is mentioned in the related work section; cf. Lines 144-151. Given the extra space in the camera ready version, we could mention it earlier in the paper as well. In summary, the paper is positioned as follows:
* For subgaussian distributions, there are existing nearly-linear runtime robust mean estimation algorithms getting the information-theoretic optimal error of $O(\epsilon \sqrt{\log(1/\epsilon)})$ in the strong contamination model.
* For Gaussian distributions, these algorithms obviously continue to work, but the information-theoretic optimal error is now $O(\epsilon)$ and existing Statistical Query lower bounds show that achieving that error is computationally hard under the strong (or even TV-distance) contamination model. Thus, one needs to restrict to the Huber contamination model.
* For Gaussians under Huber contamination, [Dia+18] gave the first polynomial time-and-sample algorithm achieving the optimal $O(\epsilon)$ error. In the present paper, we improve the runtime and sample complexity to be nearly linear. Additionally, we also obtain similar guarantees for robust linear-regression, where prior to our results, not even polynomial algorithms were known for $O(\sigma \epsilon)$ error. | Summary: Gaussian estimation and linear regression are fundamental statistical tasks, and the Huber contamination model is one of the most well-studied models for robust statistics. This paper works to design algorithms for these tasks with near-optimal sample complexity, error, and almost linear running time. The main result is for Gaussian robust mean estimation with near-optimal sample complexity of $\tilde{O}(d/\varepsilon^2)$, almost linear running time, and near-optimal $\ell_2$ error of $O(\varepsilon)$. (Previously there have only been algorithms with (i) near-optimal error but polynomially-suboptimal sample complexity and running time, or (ii) near-linear runtime but suboptimal error.) The paper obtains similar guarantees for robust Gaussian linear regression with Huber contamination: resolving the open problem of whether there exists an algorithm with $O(\sigma \varepsilon)$ error in polynomial time and sample complexity. Applicability to streaming is also discussed.
Strengths: The strengths will read similarly to the summary, in that the obtained results are for fundamental problems that mostly speak for themselves. The tasks of Gaussian mean estimation and linear regression with Huber contamination are core to algorithmic robust statistics, and it is highly desirable to have algorithms with near-optimal error, sample complexity, and near-linear running time simultaneously.
The main result of this paper is for Gaussian mean estimation with Huber contamination. Grossly oversimplifying, the result works by leveraging ideas related to [DKPP22] and [Dia+18]. [DKPP22] obtains near-linear time but suboptimal error guarantees; it obtains a fast running time via techniques that filter in random directions of large variance. [Dia+18] obtains near-optimal error with slower running time by employing a stronger filtering technique that deterministically filters with respect to the subspace of the $k$ largest eigenvalues. It is not obvious how to use these techniques together: sampling random, strong filtering directions may just consistently produce the same directions, and deterministic filtering will naturally incur a slowdown. In summary, this work's primary technical contribution is showing how to obtain the guarantees of strong filtering techniques without incurring the slower runtime that is naively associated with them.
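The filtering paradigm described above can be illustrated with a toy sketch. This is a generic spectral filter, not the algorithm of the paper under review; the median-based threshold, iteration count, and contamination setup are arbitrary illustrative choices.

```python
import numpy as np

def one_filter_step(X, threshold=9.0):
    """One step of a generic spectral filter for robust mean estimation.

    Projects the data onto the top eigenvector of the empirical covariance
    and discards points whose squared projection is far above the median.
    Illustrative only -- NOT the algorithm of the paper under review.
    """
    centered = X - X.mean(axis=0)
    cov = centered.T @ centered / len(X)
    eigvecs = np.linalg.eigh(cov)[1]      # eigenvalues in ascending order
    v = eigvecs[:, -1]                    # direction of largest variance
    scores = (centered @ v) ** 2          # squared projections onto v
    return X[scores < threshold * np.median(scores)]

rng = np.random.default_rng(0)
d, n = 20, 2000
inliers = rng.standard_normal((n, d))
outliers = rng.standard_normal((n // 20, d)) + 5.0  # 5% shifted outliers
X = np.vstack([inliers, outliers])

naive_err = np.linalg.norm(X.mean(axis=0))     # error of the plain mean
for _ in range(5):                             # a few filtering rounds
    X = one_filter_step(X)
filtered_err = np.linalg.norm(X.mean(axis=0))  # error after filtering
```

With the shifted outliers above, a few filtering rounds drive the estimation error well below that of the plain empirical mean; the actual algorithms replace the crude relative-median threshold with carefully calibrated tail bounds.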
Weaknesses: One could argue that the very high-level approach for the main result of mean estimation is not too surprising in how it blends the intuitions of [Dia+18] and [DKPP22]. Still, there are significant technical obstacles to this approach and it is perhaps even nicer to see how techniques from this general body of work are flexible enough to coalesce into a new, important result.
Providing some discussion on the relationship to settings with non-identity covariance might be beneficial.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: Could you provide discussion on implications for more general covariances?
"reminder" -> "remainder" line 233
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 4 excellent
Limitations: The limitations are appropriately discussed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the positive evaluation. We respond to their question about general covariance matrices below.
First, we can restrict to the case that the covariance matrix is unknown to the algorithm; otherwise, using an appropriate whitening of the data can reduce to the identity covariance case. For this case, we note the following:
**(Information-theoretic Error)** For unknown covariance Gaussians, the information theoretic optimal for the Euclidean error is $\Theta(\epsilon \sqrt{\lVert \Sigma \rVert_{op}} )$. Moreover, this can be achieved with linear in $d$ sample complexity by (computationally-inefficient) algorithms. See, e.g., [DK23, Lemma 1.9 and Proposition 1.20].
**(Computationally-Efficient Error)** However, any computationally-efficient (statistical query) algorithm that achieves error $o( \sqrt{\epsilon} \sqrt{\lVert \Sigma \rVert_{op}} )$ requires $\Omega(d^2)$ samples [Dia+22a, Theorem 6.11 with $k = \sqrt{d}$]. With $d^2$ samples, one can robustly learn the covariance (without the extraneous $\log(1/\epsilon)$ factor) using [Dia+18] and reduce it to the known covariance setting. Crucially, the algorithm in [Dia+18] runs in quasipolynomial time and speeding up this algorithm is an important problem but beyond the scope of our work.
---
Rebuttal Comment 1.1:
Comment: Thank you for your response, I have no more questions. | Summary: This paper presents an algorithm to estimate the mean of a Huber contaminated d-dimensional Gaussian with sample complexity of n=O(d/\epsilon^2) and a run time that is near-linear. They also provide an algorithm for robust linear regression with sample complexity of n=O(d/\epsilon^2) and a run time that is again near-linear. The results improve upon the existing poly(d/\epsilon) algorithms that achieve \Theta(\epsilon) error.
Strengths: The paper is well-written and the flow is good. Although the paper is highly theoretical, the writing is friendly.
The results of the paper seem to answer an important open question, and the paper claims to achieve a sample complexity that is close to that of the mean estimation problem in the uncontaminated case, which is good.
Weaknesses: The paper seems to combine the techniques present in two existing papers to obtain their result. Although the authors clarify that such a combination is highly non-trivial, the math used in the proof section is very dense for a thorough verification of this claim.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: In Appendix B (which forms the core of the proof of Theorem 1.3), what is the core insight that yields the final sample complexity? The paper claims that combining the techniques from [DKPP22] and [Dia+18] is highly non-trivial. However, the core proofs in Section B.4.1 do not contain any novel proof techniques apart from the ones already present in [DK19] and [Dia+16]. If you could offer more insight into this in the main paper, that would be good.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: This is a theoretical paper with no negative societal impact.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate the positive feedback on the contributions and the writing.
However, it appears that the reviewer touched upon and subtly intertwined two distinct aspects of the algorithm:
(i) combining the algorithmic techniques used in [DKPP22, Dia+18] that is developed in Section 3, and
(ii) the sample complexity for ensuring the deterministic conditions (discussed in Appendix B.4.1 and its relation to [DK19, Dia+16]).
Specifically, we would like to highlight that (i) is the main contribution of our work and forms the core of Theorem 1.3. Thus, to minimize any potential confusion, we would like to provide clarifications on both of these points below.
**(Algorithmic techniques from [DKPP22, Dia+18])** Our main contribution is achieving the optimal error in nearly-linear runtime, which requires a non-trivial combination of the algorithm components from [DKPP22, Dia+18]. Combining the techniques from [DKPP22, Dia+18] is the main focus of our paper which we argue that is non-trivial, both from an algorithmic and an analytical viewpoint. We highlight the roadblocks in combining these techniques in Lines 116-121. Our solution is then outlined in Lines 122-134, with the details presented in Section 3. We stress again that this solution is the main contribution of our paper (and orthogonal to [Dia+16,DK19]).
**(Sample complexity for deterministic conditions)** The proposed algorithm above succeeds under a set of (novel) deterministic conditions (given in Definition 2.1). The last of these conditions (Condition 2.c), which is the key to obtaining the optimal error, is novel, and thus requires a new proof, provided in Appendix B.4.1 (only the conditions 2.a and 2.b come from [DK19, Dia+16]). In addition, we show that Condition 2.c holds with linear sample complexity, which should be contrasted with the (large) polynomial sample complexity in [Dia+18] for the corresponding condition there. That said, we stress again that this is not the main result/focus of our paper (and is thus deferred to Appendix). The reviewer is also asking about the core insight for linear sample complexity of Definition 2.1, which we summarize next:
**(Core insight for linear sample complexity of Definition 2.1)** We focus our attention to Condition 2.c, which is the novel deterministic condition, and provide a very high level intuition for why its sample complexity should be (nearly) linear. Observe that this condition asks for a uniform concentration along all matrices $U$ of dimension $d \times k$, which are objects of $dk$ parameters. Since $k$ is logarithmic in $d$ and $1/\epsilon$, the total number of parameters is nearly linear in $d$. Informally, as the number of parameters of the objects that we ask uniform concentration for is roughly related to the sample complexity, this eventually results in a nearly linear sample complexity. (More formally, we construct a finite cover for these matrices whose size is exponential with the exponent being nearly linear. Since Gaussians have exponential concentration, the resulting sample complexity is logarithmic in the cover size, and thus nearly linear.)
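The counting heuristic in the parenthetical above can be written out schematically (informal; constants and lower-order factors are suppressed, and the cover size is only the rough order suggested by the parameter count):

```latex
% Cover \mathcal{N} of the d x k matrices U, with k = polylog(d/\epsilon):
\log |\mathcal{N}| \;\lesssim\; d\,k \;=\; d \cdot \mathrm{polylog}(d/\epsilon).
% Exponential (Gaussian) concentration plus a union bound over the cover
% then gives a sample complexity logarithmic in the cover size:
n \;\gtrsim\; \frac{\log |\mathcal{N}|}{\epsilon^{2}}
  \;\approx\; \frac{d \cdot \mathrm{polylog}(d/\epsilon)}{\epsilon^{2}},
% which is nearly linear in d.
```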
---
Rebuttal Comment 1.1:
Comment: Thank you for the clarification, I do not have any further comments. | Rebuttal 1:
Rebuttal: We thank the reviewers for their time and effort in providing feedback. We are encouraged by the positive comments, and that all the reviewers appreciated the paper for the following: (i) **novelty** (rxyU, BEmV, qzWP), **importance** (rxyU,bgYs,BEmV, 7D1F) (iii) **good** and **friendly** writing (bgYs), and (iv) **surprising results** (7D1F). We address the individual questions and comments by the reviewers separately. | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: This paper solves an important problem in robust mean estimation and robust linear regression, that is, can we achieve optimal error rate with nearly linear sample size and runtime when the inliers are standard Gaussian. All the previous works that have linear runtime suffer from an additional $\sqrt{\log(1/\epsilon)}$ factor. To derive such results, they develop a new analysis for the classic filtering technique. Moreover, they develop a novel reduction for the Gaussian linear regression to mean estimation problem, which is also of independent interest.
Strengths: - This study solves a long-standing problem in robust mean estimation and robust linear regression, that is, whether we can close the gap between the information-theoretic lower bound and the guarantees of algorithms that enjoy linear runtime. This problem, though it concerns nearly the simplest setting, is very fundamental in this field, and I feel that the analysis developed in this paper can be adapted to other tasks.
- The proof technique is very strong. The reduction for Gaussian linear regression is quite novel.
- I like the intuition illustrated in Section 1.2.
Weaknesses: - In most of the literature listed in this paper, people consider the stronger outlier model, i.e., the strong contamination model. My question is: why can the authors only handle Huber's model? What is the main difficulty in extending to the strong contamination model?
- Some notations are not consistent: the pdf function is denoted by both $P(x)$ (line 154) and $p(x)$ (line 173).
- It would be great if there were some preliminary simulations to support the theory, like plotting the runtime vs. the dimension.
Technical Quality: 4 excellent
Clarity: 2 fair
Questions for Authors: N/A
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 2 fair
Contribution: 3 good
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their effort and feedback.
Regarding the summary of the paper, we would like to highlight that the algorithm for linear regression is in fact the *first polynomial algorithm* obtaining error $O(\sigma \epsilon)$, i.e., not only attains nearly-linear runtime and nearly-optimal sample complexity, but prior to this, no polynomial-time and sample complexity algorithm was known to achieve the $O(\sigma \epsilon)$ error (cf. lines 82-83).
We now proceed by addressing the reviewer’s questions below:
**(Huber’s vs strong contamination)** There is an inherent difference in the guarantees that can be obtained between the two contamination models. While the information theoretic optimal error is $\Theta(\epsilon)$ for both contamination models (and achievable by exponential time algorithms with linear sample complexity), there is a statistical query lower bound providing evidence that it is computationally hard to reach error $o(\epsilon\sqrt{\log(1/\epsilon)})$ in the strong contamination model (or even total variation model); see lines 146-148 in the introduction. This computational lower bound necessitates that we consider the simpler Huber contamination.
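For reference, the two contamination models contrasted in this response can be stated side by side (standard definitions following the conventions of this literature, not verbatim from the paper):

```latex
% Huber contamination: each sample is drawn i.i.d. from the mixture
X_i \;\sim\; (1-\epsilon)\,\mathcal{N}(\mu, I_d) \;+\; \epsilon\, Q,
\qquad Q \ \text{an arbitrary (adversarial) distribution}.
% Strong contamination: first draw X_1, \dots, X_n i.i.d. from \mathcal{N}(\mu, I_d);
% an adversary may then inspect the samples and replace any \epsilon n of
% them with arbitrary points before the algorithm sees the data.
```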
**(Notation)** Throughout the paper, we have consistently used capital letters (e.g., $P(x)$) to denote probability density functions (pdfs). The instance $p(x)$ mentioned in line 173 does not represent a pdf; instead, it denotes the polynomial defined in the previous line.
**(Simulations)** While we acknowledge the reviewer's interest in experiments and the importance of developing practical algorithms, the primary contribution of our work is to complete the computational and statistical landscape for the problem in terms of error guarantee, sample complexity and runtime. As NeurIPS invites theoretical ML papers (such as ours), we believe that our work should not be negatively evaluated for the lack of experiments, especially when no other nearly-linear time estimator with optimal guarantees was previously known.
In the light of these responses, we kindly request the reviewer to reconsider their score.
---
Rebuttal Comment 1.1:
Title: Thanks for your reply
Comment: Thanks for your reply, which addresses my concern. | null | null | null | null | null | null |
GPEX, A Framework For Interpreting Artificial Neural Networks | Accept (poster) | Summary: This paper uses Gaussian Processes trained to match Neural Networks to subsequently "explain" the NN outputs by finding the nearest neighbors to any given test point in the training samples. An evidence lower-bound is derived that encourages the GP’s posterior to match the NN’s output. Scalability is obviously an issue for GPs. The authors use GPU acceleration techniques to enable training GPs with O(100k) inducing points. Various example experiments are shown where the 10 nearest-neighbors in training illuminate why the NN responds the way it does to a test example.
Strengths: The method seems straightforward and intuitive. The GP matching is less aggressive where the uncertainty is large (far from any training points). The examples shown seem to demonstrate that the method appears to work on these examples.
Weaknesses: I didn't find any mention of using the GP error bands in the explanations. Presumably the explanations are not as good in regions where the GP and NN are less closely matched, which should be the case where the GP error bands are large, since the method doesn't weight these regions as much in the loss. I.e., far from any training points, it's not obvious that the nearest neighbors chosen by the GP kernels are really driving the NN response. This isn't discussed at all as far as I can see.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Have you looked at examples in regions where the GP error band is large?
Assuming it's true that explanations there are less reliable, can you return info using the size of the GP error at x_test that flags this for the user?
It's nice that you can interrogate the NN using your GP to explain specific examples, but it's not obvious (to me) how one would systematically study a NN in this way, since a human can only visually inspect a small number of examples as is done in the paper. Any comments / thoughts on this?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The Limitations paragraph is super concise (presumably due to space constraints), but it would be good to expand it a bit and make it clearer. E.g., it says that the method runs well with O(1M) points, but that the GP fails to match the NN. As written, this is hard to interpret.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Please see the attached global pdf.
---
Rebuttal Comment 1.1:
Comment: Thank you for your technical comments.
=== "it's not obvious (to me) how one would systematically study a NN in this way, since a human can only visually inspect a small number of examples as is done in the paper. Any comments / thoughts on this?"
- It may so happen that in the GP's kernel-space, the test instance is equally close to many training instances (which seems to be the case for, e.g., rows 1-5 of Fig. 2a). Interestingly, for some test examples we see a different behavior as follows: $x_{test}$ is classified by finding one or a few training samples which are very similar (almost identical) to the test instance. It is as if the model "memorizes" the entire training set, and when it sees, for example, the white dog in row 8 of Fig. S43 (in the supplementary), intuitively the model thinks it is very similar to one of the training instances that it has seen before (i.e. the very similar white dog in the 8th row and 2nd column of Fig. S43), and therefore it classifies the image as dog. We see this behavior in many cases: rows 6 and 15 of Fig. 2(a), row 8 of Fig. S43, row 7 of Fig. S42, row 7 of Fig. S44, rows 7 and 8 of Fig. S48, and row 2 of Fig. S49. This "memorization behavior" and its connection to generalization have received recent research interest [43, 44, 45] and are clearly observable in our GP's explanations.
- Further quantitative experiments are needed to analyze the number of nearest neighbors which have a significant effect, and to support, for example, the above intuitive discussion. We have implemented the proposed GPEX as a tool (with all inference formulas happening under the hood) so other researchers can answer this and many other interesting questions. In the paper (in particular the supplementary material) we have analyzed many factors: the effect of the last layer's width, the number of inducing points, the number of epochs, etc., which has already made the article quite long. Moreover, due to the limited rebuttal time we could only perform some of the experiments suggested by the reviewers. So we suggest that this and many other interesting questions be answered in future research using our method/tool.
=== "Have you looked at examples in regions where the GP error band is large? "
We have analyzed many factors in the supplementary material (which is already long), and due to the limited rebuttal time we could only perform some of the experiments. So the use of the GP's uncertainty can be analyzed in future research using our tool. In particular, uncertainty quantification methods like Monte-Carlo dropout [7] have known issues, and the GP's uncertainty may be a better choice. As a very preliminary experiment, we tried to relate the GP's uncertainty to the neural network's failures, with the hope that for misclassified cases the GP would show higher uncertainty. But at least in that preliminary experiment we failed to do so, which is why we didn't include that experiment in the paper.
=== "The Limitations paragraph is super concise (presumably due to space constraints), but it'd be good to expand it a bit and make it more clear. E.g., it talks about how the method runs well with O(1M) points, but that the GP fails to match the NN. As written this is hard to interpret."
Thank you for this comment. In the final version we will definitely consider this point. In particular, the page limit of the final version "may" be extended to 10 pages (maybe like previous year) which gives us even more space to accommodate the comments.
[43] Feldman, Vitaly, and Chiyuan Zhang. "What neural networks memorize and why: Discovering the long tail via influence estimation." Advances in Neural Information Processing Systems 33 (2020): 2881-2891.
[44] Adam Pearce, Asma Ghandeharioun, et al. "Do Machine Learning Models Memorize or Generalize?". website: https://pair.withgoogle.com/explorables/grokking/
[45] Mahajan, Divyat, Shruti Tople, and Amit Sharma. "The Connection between Out-of-Distribution Generalization and Privacy of ML Models." arXiv preprint arXiv:2110.03369 (2021).
[7] Y. Gal and Z. Ghahramani. Dropout as a bayesian approximation: Representing model uncertainty in deep 366 learning. In international conference on machine learning, pages 1050–1059. PMLR, 2016 | Summary: This article proposes a new training approach for Gaussian Processes (GPs) which encourages their posterior to match a given Artificial Neural Network (ANN) output. Their approach adopt a scheme that permits a scalable training, which is an issue when it usually comes to GPs, and that is motivated by the derivation of an evidence lower bound (ELBO) encouraging the faithfulness of the GP to the target ANN. The authors then explain the decisions made by an ANN with the help of a matching GP with two different methods : explanations by similar examples (similar in the kernel space of the GP), and using saliency map to determine why some examples where similar.
Strengths: 1. It is, to my knowledge, truly original to use a GP in order to explain a matching ANN.
2. The approach for training GPs tackles a main issue of traditional approaches: scalability with regard to the number of inducing points.
Weaknesses: _Major_
1. My first concern, which probably summarises many of the flags I have to raise, is that the article is too dense, and this leads to several problems. One consequence is that too many things that are necessary to understand the article as a whole are put in the supplementary material. For example, the way the CAM-like explanations (line 282) are computed is not even slightly described in the main article; or, « According to the experiments and detailed discussions of Sec. S6 in the supplementary, the GP’s kernel that we find in this paper is superior due to a technical point in the formulation of representer point selection » (lines 299-300); or the fact that the derived ELBO is a contribution in itself, yet is not presented in the main paper, only pieces of it. The supplementary material should not contain information that is vital to understanding the methods, the ideas and the approaches, but further reading.
2. When manipulating concepts such as explainability and interpretability (which seem, in the article, to be used interchangeably; lines 20-21: « After matching the GPs to the ANNs, we used the GPs’ kernel functions to explain the ANNs’ decisions. »; the title says « interpreting artificial neural networks », and lines 75-76: « GPEX can be used by machine learning researchers to interpret [...] their artificial neural networks. »), since they do not have a clear and universal definition, it is important to state what these terms refer to in the article, just as some notations are defined in Section 2.1. This lack of clarity makes it harder to grasp the goal of the article.
3. If I understand correctly, the explainability approach finds the nearest neighbours, but in the kernel space. How does that differ from computing the nearest neighbours in the output space of the network? How does that make the neighbours that are found more relevant? Both the kernel and the network can be seen as an embedding. Moreover, the GP's job is to imitate the network. It seems to me that this approach combines two explainability approaches: finding a « more interpretable » predictor imitating the predicting behaviour of the former, and finding examples that are similar to one another. Why would combining those two approaches, both of which have their flaws, lead to any better explanations?
4.1. I find the title misleading; explaining (interpreting, as the title says) ANNs would seem to be the main point of the article, but in reality explaining ANNs concerns only page 8 and half of the last page, whereas training a GP faithful to a trained ANN is really the main point of the article. Moreover, neither the saliency maps nor looking at the most similar examples from the training set (which amounts to approximating the decision function of the network by a more interpretable predictor: a k-nearest-neighbours predictor) is a novel idea.
4.2. As for the related works, explaining using GPs is only discussed in ~3 lines, citing a single work. This might be the first time an explanation is based on a bridge between GPs and ANNs, but the authors state that « Gaussian processes are highly interpretable » (lines 37-38) and that « The analogy between Gaussian processes (GPs) and deep artificial neural networks (ANNs) has received a lot of interest, and has shown promise to unbox the blackbox of deep ANNs » (lines 1-3); it is therefore expected to see « what kind of interest they have received », especially in the discussion / related-work sections of the article.
5. One of the contributions is that the proposed approach is more scalable than previous approaches. Without any empirical comparison with the state of the art for training GPs, both in terms of training time and of performance, it is hard to state whether the proposed approach works particularly well or is just as effective as what is found in the literature. That is also why the experiments run in Section 4.1 could be misleading. Since no benchmark is used for comparison, how can one know whether the obtained results are good? This lack of information makes it hard to state whether or not the results are positive.
6. I found some claims vague, doubtful or at least highly debatable, and most importantly sometimes not supported at all; lines 37-38: « Gaussian processes are highly interpretable. » How are they more interpretable? In what way? Doesn’t that depend on the kernel that is used? The same goes for lines 207-208: « Because according to theoretical results on GP-ANN analogy, the second last layer of ANN should be wide. » This might be true, but there is a need to cite some works here. Also: « Our method scales very well, and Alg. 2 runs fine even on imagenet with more than 1M inducing points » (lines 343-344); what does « fine » mean? (See also Weaknesses – Major – 5)
7. It is said that experiments were conducted on « 5 datasets (4 image datasets, and 1 biological dataset) » (lines 17-18), but nothing in the article is discussed about the biological dataset.
_Typos / Minor_
1. Line 85 : « The number of GPs is equal to the number of $\underline{the}$ outputs from the ANN. »
2. Some nomenclature and terminology are unclear. For example, lines 91-92: « we consider a general feed-forward pipeline that contains an ANN as a submodule »; what is a feed-forward pipeline? What is a submodule? Figure 1a) somewhat doesn’t help understanding. (See also Weaknesses – Major – 2)
3. Equation 5: Is there a missing term? What is the expectation computed with regard to? (?~q)
4. Line 164 : « procecdure »
5. Line 167 : « matrcies »
6. Line 179 : « from linear algebra if follows that »
7. Line 327 : requries
8. Having the Related Work at the end of the article is peculiar; it seems to me that precious information for better understanding the relationship between ANNs and GPs during the reading of the article (and for appreciating the relevance of the work) is put at the end of the article.
9. It should be explicitly written that the kernel function $\mathcal{K}(\cdot, \cdot)$ outputs a matrix whose dimensions correspond respectively to the cardinality of the first and the second input, for it is implicit, but not explicit, yet important.
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: 1. Aren’t those two sentences (« The analogy between Gaussian processes (GPs) and deep artificial neural networks (ANNs) has received a lot of interest, and has shown promise to unbox the blackbox of deep ANNs » (lines 1-3) and the third contribution, « With the best of our knowledge, our work is the first method that performs knowledge distillation between GPs and ANNs. ») contradictory?
2. Can some details be provided concerning the kernel choice $f(\cdot)$?
3. Why choosing Pearson correlation coefficient for comparing GPs and ANNs? Is it what’s often seen in the literature? Why not comparing the coherence of the predictions, for two predictors can have similar prediction function but predict differently most of the time?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 2 fair
Limitations: The authors did not discuss the limitations of their explainability approach. Nowadays, many problems with (for example) saliency maps are well-known (not robust to adversarial perturbations [1], simply unreliable [2], etc.), thus one would expect a few words on those matters.
[1] Ghorbani, Amirata, Abubakar Abid, and James Zou. “Interpretation of neural networks is fragile.” Proceedings of the AAAI Conference on Artificial Intelligence. Vol. 33. 2019.
[2 ]Kindermans, Pieter-Jan, Sara Hooker, Julius Adebayo, Maximilian Alber, Kristof T. Schütt, Sven Dähne, Dumitru Erhan, and Been Kim. “The (un) reliability of saliency methods.” In Explainable AI: Interpreting, Explaining and Visualizing Deep Learning, pp. 267-280. Springer, Cham (2019).
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Please see the attached global pdf.
\textcolor{blue}{
The article is too dense.
}
\\The paper is dense because it contains two main ideas: 1. scalability and 2. knowledge distillation. Without scalability, successful knowledge distillation would not have happened (as underlined in the analysis of Fig. S66), so we had to include both scalability and knowledge distillation in one paper.
\\\textcolor{blue}{
When manipulating concepts such as explainability and interpretability ... This lack of clearness ...
}\\
We added the following paragraph to the paper, which describes how the GP can explain the ANN:
\\\textit{
... After successful distillation, both the ANN and the GP correspond to a single function which is parameterized in two different ways. One is the parameterization by ANN weights, which is not necessarily understandable to humans. The other is the parameterization by GP's posterior which is understandable to humans. ...
}
\\As you mentioned, "interpretability" and "explainability" are used to convey different meanings in the literature. In this paper, however, we use the terms interchangeably. We added the following to the paper:
\\\textit{
... In this paper we used the terms interpretability and explainability interchangeably. ...
}
\\\textcolor{blue}{
How does that differ from only computing the nearest neighbours in the output space of the network?
}\\
If the ANN is a classifier with $C$ classes, in the final layer instances from the same class are close to one another: all instances of class 1 are near the vector $[1.0, 0.0, ..., 0.0]$, all instances of class 2 are near the vector $[0.0, 1.0, ..., 0.0]$, ..., and all instances of class $C$ are near the vector $[0.0, 0.0, ..., 1.0]$. So the final layer is not a good candidate for computing similarities. But an interesting question is, what if someone computes the similarity using intermediate layers of the neural network?\\
We cannot answer every question in one paper (the paper is already 80 pages long with the supplementary), so we implemented GPEX as a tool that researchers can use to answer such questions. We added the paragraph below about future directions to the paper.
\\\textit{
... In this paper we analyzed the effect of the number of inducing points, ... One can use the proposed tool to answer other questions, like, is ... . How do the similarities provided by the GP kernel compare to the similarities obtained from intermediate layers of the ANN? ...
}
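The final-layer collapse described in this answer can be illustrated with a small numpy sketch. The confident toy classifier below is hypothetical (random logits with a large bump on the true class), not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

# A well-trained C-class classifier: logits of same-class inputs all point
# strongly at that class, so softmax outputs collapse onto one-hot vectors.
C = 10
labels = rng.integers(0, C, size=100)
logits = rng.normal(scale=0.5, size=(100, C))
logits[np.arange(100), labels] += 8.0        # confident predictions
probs = softmax(logits)

# Distances in the output space are near zero within a class, so ranking
# neighbours by output-space distance is essentially uninformative.
same = labels == labels[0]
d_same = np.linalg.norm(probs[same] - probs[0], axis=1)
print(d_same.max())
```

All same-class outputs sit on top of the same one-hot vector, so the output space cannot distinguish which training instances a test point is "really" similar to.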
\\\textcolor{blue}{
Explaining ANNs concerns only page 8 and half of the last page, whereas training a GPs faithfully to a trained ANN really is the main point of the article.
}\\
This is because a Gaussian process is a white-box model with well-known behaviour. Therefore, the hard part is finding a GP that is equivalent to the ANN; the rest (obtaining explanations by looking at the GP) is straightforward.\\
\\\textcolor{blue}{
As for the related works, explaining using GPs is only discussed on $\approx$3 lines, citing a single work. ...
}\\
Thank you. We extended the related works section by providing more details about SV-DKL [23]. Moreover, we added another method (ref [35] below) to the paper.
\textit{"
... SV-DKL [23] derives a lower-bound for training a GP with a deep kernel. In that method, a grid of inducing points is considered in the kernel-space (like the vectors $\lbrace(\tilde{\akvec{u}}^{(\ell)}_{m}, \tilde{v}^{(\ell)}_m) \rbrace_{m=1}^{M}$ with the notation of this paper).
Afterwards, each input instance is first mapped to the kernel-space and the output is computed based on similarities to the grid points in the kernel-space. Since the GP posterior is computed via the grid points, SV-DKL [23] is scalable. But unfortunately the number of grid points cannot be increased above 1000, even for Cifar10 \cite{ds_cifar10} and with an RTX 3090 GPU. Therefore, this may limit the flexibility of the GP's posterior [32]. ..."
}
\\\textcolor{blue}{
Gaussian processes are highly interpretable. How are they more interpretable?
}\\
Gaussian process is definitely a white-box model.
\\\textcolor{blue}{
Aren’t those two sentences (...) contradictory?
}
In summary,
- sentence 1: GP-ANN analogy has shown promise before.
- sentence 2: we are the first method applying knowledge distillation to achieve the GP-ANN analogy.
The promise in sentence 1 has been shown in ways other than knowledge distillation, so there is no contradiction between the two sentences. For example, in Neural Tangents [20] the neural network is transformed into a GP by making each and every layer wide and training on a specific loss (not knowledge distillation).
\\\textcolor{blue}{
Can some details be provided concerning the kernel choice $f(.)$?
}
Thank you. We added the following sentence to the paper to clarify
\\\textit{
... A feature point like $\akvec{x}$ is first mapped to the kernel-space as $\akvec{u}^{(\ell)} = f_{\ell}(\akvec{x})$. Note that the kernel functions $\lbrace f_\ell(.)\rbrace_{\ell=1}^L$ are implemented as separate neural networks, or, for the sake of efficiency, as a single neural-network backbone with $L$ different heads.
Afterwards, the GP's posterior on $\akvec{x}$ depends on the kernel similarities between. ...
}
\\\textcolor{blue}{
The authors did not discuss the limitations of their explainability approach.
}
We added the following point to highlight the limitations of the proposed method.
\\\textit{
... . Although the obtained GP kernel is faithful globally, the number of nearest neighbours that equally affect the predictions may prevent a human from understanding the explanations. Moreover, the CAM-like explanations that we obtain might be prone to the known problems of attribution-based methods. ...
}.
---
Rebuttal Comment 1.1:
Title: Response to rebuttal
Comment: -Can some details be provided concerning the kernel choice? / "Gaussian processes are highly interpretable." How are they more interpretable?
So, if I understand correctly, the kernel function is a neural network? (More details than the 1-2 lines provided in the rebuttal would be necessary to explain exactly what kind of network we are talking about.) In the end, this leads to a Gaussian process that actually isn't transparent, since a complicated transformation of the input (the kernel function) is necessary. It is therefore arguable whether a step toward a more "interpretable" model has been made. I could be misunderstanding, but I feel like the argument here is similar to "the last layer of my black-box neural network is a simple linear layer, so my network is a white-box".
-How does that differ from only computing the nearest neighbors in the output space of the network?
I actually meant computing the similarity before applying a hard-max function at the end of the ANN, i.e., after what would probably be a soft-max function. Example with 3 classes: data points with ANN outputs [0.1, 0.3, 0.6] and [0.09, 0.28, 0.63] should be quite similar. Why is proceeding like that less interesting than computing the similarity in the kernel space, considering that the GP, in the end, approximates the ANN, and especially since the kernel function is itself a neural network (as you mentioned in the rebuttal)?
This is joint with the following: concerning the "future work avenue" mentioned in the rebuttal ("But an interesting question is, what if someone computes the similarity using intermediate layers of the neural network?"), I don't see it as a future-work avenue but as a question the authors should be discussing, for it justifies the use of the GP for obtaining explanations. Indeed, if the answer to that question is "computing the similarity using intermediate layers of the neural network works just fine", then the relevance of using the GP could be called into question.
-The article is too dense.
I understand that the scope of the article requires much information and detail, but this is not a sufficient justification for why important information (see Weakness - Major - 1) is in the supplementary material.
---
Reply to Comment 1.1.1:
Comment: We hope the following points cover the questions
- GP can ultimately unbox neural networks, as underlined in ref [37]:
"We invite everyone to ... Neural Tangents and help us open the black box of deep learning. "[37]
- GP is a white-box model. This fact has motivated 10 years of research on GP-ANN analogy ([19],[7],[5],[20]).
- Regarding the use of similarities in the output space of the neural network: let's say we compute the representations of a test point $x_{test}$ and of the training instances in the output space. One CANNOT say $x_{test}$ is classified as such because it is similar to some training instances in that space, because we don't know whether the neural network actually works that way. But for the GP "we know" that it uses the similarity function (the kernel function) to make predictions. So, unlike for the NN's output space, for the GP it is completely accurate to say that "$x_{test}$ is classified as such because it is similar to $x_{i1}$, ..., $x_{iM}$ in the kernel space".
- Regarding the GP's kernel being a neural network itself: although the kernel is a neural network, with the GP it is still completely accurate to say that "$x_{test}$ is classified as such because it is similar to some training instances in the kernel space". The question then comes down to understanding why the model thinks $x_{test}$ is similar to a training instance. To answer the latter question: 1. We tailored the CAM idea to our kernel modules, which highlights the regions that contributed the most to the similarities. 2. In some cases the similarities are highly interpretable to a human, even without using the CAM idea. To see such examples, please refer to Fig. 2 (in particular rows 1 to 5 of Fig. 2-a) or to more examples in the supplementary.
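The kernel-space nearest-neighbour explanation described in this reply can be sketched in a few lines of numpy. The embeddings below are random stand-ins for the kernel modules' outputs, and an RBF-style similarity is assumed:

```python
import numpy as np

rng = np.random.default_rng(3)

# Kernel-space representations of the training set (random stand-ins for
# f_l(x)) and of one test point.
U_train = rng.normal(size=(1000, 5))
u_test = rng.normal(size=(5,))

# RBF similarities: exactly the quantity the GP posterior is built from,
# so these neighbours genuinely drive the prediction.
sims = np.exp(-((U_train - u_test) ** 2).sum(axis=1))
top5 = np.argsort(sims)[::-1][:5]      # most similar training instances
print("explained by training instances:", top5)
```

Because the GP's posterior is literally a similarity-weighted combination over these points, reporting the top neighbours is a faithful explanation of the GP (and, after distillation, of the matched ANN).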
[37] https://ai.googleblog.com/2020/03/fast-and-easy-infinitely-wide-networks.html
[5] A. G. de G. Matthews, J. Hron, M. Rowland, R. E. Turner, and Z. Ghahramani. Gaussian process behaviour in wide deep neural networks. In International Conference on Learning Representations, 2018.
[7] Y. Gal and Z. Ghahramani. Dropout as a Bayesian approximation: Representing model uncertainty in deep learning. In International Conference on Machine Learning, pages 1050–1059. PMLR, 2016.
[19] R. Neal. Bayesian Learning for Neural Networks. Lecture Notes in Statistics. Springer New York, 2012.
[20] R. Novak, L. Xiao, J. Hron, J. Lee, A. Alemi, J. Sohl-Dickstein, and S. Schoenholz. Neural Tangents: Fast and easy infinite neural networks in Python. 2020. | Summary: This paper derives an evidence lower bound that encourages a GP's posterior to match an ANN's output, without any requirement on the ANN, and then uses the GPs' kernel functions to explain the ANNs' decisions.
Strengths: 1) the paper provides a theoretical grounding for its algorithm
2) the implementation is publicly available
3) it is good to see that the GP output matches the ANN well
Weaknesses: 1) The main point of this paper is to explain ANNs. However, Figure 2 contains the only experimental results about explanation, and it only shows some sample explanations. There is no comparison to existing explanation methods and there are no quantitative results evaluating the explanation performance.
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: None
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 3 good
Limitations: None
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Please see the attached global pdf.
\textcolor{blue}{
The main point of this paper is to explain ANN. However, Figure 2 is the only experiment results that are about explanation, which only contains some sample explanations. There is no comparison to existing explanation methods and there is not quantitative results about the evaluation performance.
}
\\In Sec. 4.3 of the original submission, we compare against representer point selection [33] both quantitatively and qualitatively. Moreover, in the global pdf we added a comparison to another method, the influence function [36].
---
Rebuttal Comment 1.1:
Title: thanks for your rebuttal
Comment: Given the new results, I'd like to increase my score to 5. | Summary: The paper derives an ELBO to end-to-end distill a deep neural
network into a set of Gaussian processes (one per output) with deep neural
network kernels and the full training set used as inducing points.
By using a low output-dimensionality for the feature map neural network,
the computational cost of the inversion/decomposition of the kernel matrix
is kept constant while arbitrarily scaling the number of inducing points.
The paper empirically validates the discrepancy between the teacher neural
network and the student DNN-GP on 5 datasets with 2 different models (5 on
ResNet-18, 2 on some custom attention-based architecture).
Two approaches are introduced to attempt to explain the prediction decisions of
the distilled neural networks based on the DNN-GP: through similarity in the
DNN-kernel's feature space, and a pixel-wise contribution based on the
similarity to the nearest neighbors in the DNN's kernel space.
An accompanying framework is provided, which claims to distill any DNN in Pytorch.
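The scalability mechanism mentioned in this summary (a low-dimensional, explicit feature map keeping the kernel solve cheap regardless of the number of inducing points) can be sketched in numpy via the Woodbury identity. This is a hedged illustration of the general linear-algebra trick, not the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(4)

M, D, s = 2000, 10, 0.1            # many inducing points, low-dim explicit features
Phi = rng.normal(size=(M, D))      # explicit kernel-space features of inducing points
y = rng.normal(size=(M,))

# The noise-free kernel matrix Phi @ Phi.T has rank at most D, so by the
# Woodbury identity the M x M solve (Phi Phi^T + s I)^-1 y reduces to a
# D x D solve, with cost independent of M:
inner = s * np.eye(D) + Phi.T @ Phi                     # D x D system
alpha = (y - Phi @ np.linalg.solve(inner, Phi.T @ y)) / s

# Sanity check against the direct (and much more expensive) M x M solve.
alpha_direct = np.linalg.solve(Phi @ Phi.T + s * np.eye(M), y)
print(np.allclose(alpha, alpha_direct))                 # True
```

This is why arbitrarily scaling the number of inducing points is cheap once the feature space is explicit and low-dimensional, which is exactly the trade-off the reviewer highlights below.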
Strengths: - The ELBO and distillation process from GP to DNN-GP is a novel contribution.
- Code is provided for all experiments, which is a major strength.
- Important derivations are provided in the supplement.
Weaknesses: - The paper reports on GPs but is not very explicit about the fact that
DNN kernels are used, although they seem to be strictly necessary for the
distillation (e.g., the low-dimensional, explicit feature space). Prior work
on DNN-GPs is only partly discussed (e.g. [23]).
- The paper claims to introduce a novel approach to scale the number of inducing points.
However, the computational trick comes down to using a low-dimensional, explicit
feature space, leading to a low-rank, high-dimensional noise-free kernel
matrix, which is only useful for low-dimensional explicit feature spaces as
in DNN-GP.
- The DNN-kernels are significantly more complex (resnet50) than the teacher
DNN (ResNet18). This explainability-complexity trade-off is not explicitly
discussed, and seems like a significant limitation.
- The experiments are not presented very well: Figure 2 (in particular a,c,d) is
quite hard to understand without jumping to the respective text passage.
Figures descriptions in general are lacking in detail (i.e., which are
nearest neighbours, and which is the analyzed sample in Figure 2).
Fig 2(b) has very small captions.
- In the experiment in Figure 3: while the classifiers on MNIST/CIFAR10/Kather
are almost completely above 0.95, a PCC of 0.90 and slightly below for the
other datasets and models leaves a non-negligible discrepancy, especially
given that these models are distilled in order to be explained. This should
be discussed in more detail.
- The qualitative analysis of the explanations, both the one based on the
kernel similarities, which shows similar samples, and the CAM-based approach,
is difficult to interpret. While the retrieved samples generally seem to be
other samples from the same class that are somehow semantically related,
these results are not directly compared to any approach, making it hard to
argue in favor of the explainability claim. The focus should be more on the
experiment in Fig 2b, ideally with more trials and different data sets.
- The explainability approach is only compared to a single baseline, which is
too few.
- While the experiment in Fig 2b presents a quantitative analysis of the
explainability of the method compared to one other baseline, it is only
conducted a single time, on only a single data set. Since the paper claims
explainability of the models, there should be more experiments in order to
support this claim.
- While the paper claims to provide a tool, the linked software framework has
close to no docstrings, provides no documentation, features no tests, and does
not provide a plug-and-play package (no setuptools).
It is good practice to publish accompanying code, and the paper goes further
by also including code for the corresponding experimental results.
However, at the reviewed state, the *tool* simply presents code to the
paper, rather than providing the software framework suggested in the
introduction.
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: - The paper could be improved by being more explicit about its use of DNN kernels.
- It might be better for the paper to instead of stretching the
scalability-claim, openly discuss how a small, explicit feature space leads
to a constant rank of the noise-free kernel matrix, creating a situation
where GPs do not scale beyond the explicit feature space's size, thus arguing
for low-dimensional DNN-kernel-spaces.
- Given the increased complexity of the kernel's DNN to the teacher model, a
discussion on the complexity-explainability trade-off would be very interesting.
Further experiments with smaller kernel DNN's in the distillation-process
could also bring insight in this regard.
- For the experiments in Figure 3, comparing to some baseline might help in
understanding the PCC values better. While a PCC of 0.9 sounds good, it would
help to also highlight the differences between student and teacher, although
I do not have a concrete idea how this could be done.
- To put the gained explainability into context, especially in the qualitative
experiments, it would really help to compare to some baselines.
- The quantitative experiment comparing to Representer-Point selection is very
valuable, but would be even more valuable with multiple tries, and on
different data sets. It feels somewhat incomplete.
- The software claim in the beginning of the paper suggests a well-maintained
software framework for the GP model distillation, which does not seem to be true.
I would suggest to soften this claim (i.e., simply provide a footnote stating
"Code for methods/experiments provided at...")
### Minor
- Alg. 2 title is misleading, a better name may be: training loop for GP
distillation.
- Alg. 1 can be moved to the supplement, as it is only a step of
gradient descent for the DNN-kernels and does not add much to the manuscript
- l.126 nominator -> numerator
- figures should be described more in detail, especially the qualitative one
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 3 good
Contribution: 2 fair
Limitations: The paper discusses some limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Please see the attached global pdf.
\textcolor{blue}{
The paper reports on GPs, but is not very verbose in the fact that DNN-kernels are used, ....
}
\\In Sec. 2.2. we added the following sentence
\\\textit{
Note that the kernel functions $\lbrace f_\ell(.)\rbrace_{\ell=1}^L$ are implemented as separate neural networks, or, for the sake of efficiency, as a single neural-network backbone with $L$ different heads.
}
\\\\\textcolor{blue}{
Prior work on DNN-GPs is only partly discussed (e.g. [23]).
}
\\In the related work section, we added the following details about the SV-DKL method
\\\textit{"
... SV-DKL [23] derives a lower-bound for training a GP with a deep kernel. In that method, a grid of inducing points is considered in the kernel-space (like the vectors $\lbrace(\tilde{\akvec{u}}^{(\ell)}_{m}, \tilde{v}^{(\ell)}_m) \rbrace_{m=1}^{M}$ with the notation of this paper).
Afterwards, each input instance is first mapped to the kernel-space and the output is computed based on similarities to the grid points in the kernel-space. ..."
}
\\\\Besides SV-DKL, in the paper we have cited and discussed GPytorch [25], KISGP [31], and binary-tree kernels [4]. We also added the work of Wilson et. al. (ref. [35] at the end of this document) which actually is based on KISS-GP [31].
\\\\\textcolor{blue}{
The paper claims to introduce a novel approach to scale the number of inducing points. However, the computational trick comes down to ...
}
\\\begin{itemize}
\item The core idea of our method is storing the kernel-space representations in the matrix $\mathbf{U}$ and involving only one row of $\mathbf{U}$ in the computation graph (as done in line 6 of Alg. S1). This way of handling the GP kernel is completely novel.
\item Also doing knowledge-distillation between ANN and GP is novel.
\item Although the low-rank approximation is a simple idea, it had not been applied to GP training before.
\end{itemize}
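A numpy caricature of the cached-representations idea described in the first point above: the kernel module and shapes are hypothetical stand-ins, and autograd is elided (in the PyTorch version the cached rows are detached constants while only the recomputed row stays in the computation graph):

```python
import numpy as np

rng = np.random.default_rng(5)

M, D = 1000, 5
X = rng.normal(size=(M, D))          # inputs of the M inducing points

def kernel_head(x):
    """Stand-in for the trainable kernel module f(.)."""
    return np.tanh(x)

# Cached kernel-space representations of ALL inducing points (the matrix U).
# In the real PyTorch version this buffer lives outside the autograd graph.
U = kernel_head(X)

def training_step(i):
    # Recompute ONLY row i through the kernel module; in PyTorch this is the
    # single row kept in the computation graph, while the other M-1 cached
    # rows are treated as constants (detached).
    U[i] = kernel_head(X[i])
    return U @ U[i]                  # kernel similarities the GP posterior uses

sims = training_step(7)
print(sims.shape)                    # (1000,)
```

The point of the trick is that the per-step gradient cost depends on one row rather than on all M inducing points, which is what allows scaling to over 1M inducing points.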
\textcolor{blue}{
The DNN-kernels are significantly more complex (resnet50)....
}
\\It might be the case that the GP kernel has to have more parameters than the ANN, but we see this as an open question rather than a limitation.
Please note that we cannot answer every question in one paper, which is why we have implemented GPEX as a publicly available tool with a simple API.
\\We added the following part to the paper.
\\\textit{
... In this paper we analyzed the effect of number of inducing points, .... One can use the proposed tool to answer other questions, like, is the GP kernel required to have more parameters than the ANN itself, as observed in the experiments of this paper? Is the uncertainty provided by the GP correlated with the understandability of the explanations to humans or the ANN's failures? ...
}
\\\\\textcolor{blue}{
The experiments are not presented very well: Figure 2 (in particular a,c,d) is quite hard to understand without jumping to the respective text passage. Figures descriptions in general are lacking in detail (i.e., which are nearest neighbours, and which is the analyzed sample in Figure 2). Fig 2(b) has very small captions.
}
\\Thank you. We added captions to all figures.
\\\\\textcolor{blue}{
The explainability approach is only compared to a single baseline, which is too few.
}
\\We added another baseline, the influence function [36]. For details please refer to Sec. 1 of this document. Note that, unlike attribution-based methods, there are not many similarity-based explanation methods, so we compared against representer point selection [33] and the influence function [36].
\\\\\textcolor{blue}{
While the paper claims to provide a tool, the linked software framework has close to no docstrings, provides no documention, features no tests, and does not provide a plug-and-play package (no setuptools). It is good practice to publish accompanying code, and the paper goes further by also including code for the corresponding experimental results. However, at the reviewed state, the tool simply presents code to the paper, rather than providing the software framework suggested in the introduction.
}
\\We added documentation via "readthedocs" as well as sample notebooks in the github repository. Please refer to the anonymous github repo (link available in the submitted paper).
\\\\\textcolor{blue}{
Given the increased complexity of the kernel's DNN to the teacher model, a discussion on the complexity-explainability trade-off would be very interesting. Further experiments with smaller kernel DNN's in the distillation-process could also bring insight in this regard.
}
\\Great suggestion. We have done some parameter analysis on the effect of the width of the last layer, the number of inducing points, and the number of epochs for which the ANN is trained. Unfortunately we cannot answer all questions in one paper (the paper is already 80 pages long including the supplementary). We added this to the future directions:
\\\textit{
... In this paper we analyzed the effect of number of inducing points, the width of the second last layer, and the number of epochs for which the ANN is trained. One can use the proposed tool to answer other questions, like, is the GP kernel required to have more parameters than the ANN itself, as observed in the experiments of this paper? Is the uncertainty provided by the GP correlated with the understandability of the explanations to humans or the ANN's failures? ...
}
\\\\\textcolor{blue}{
To put the gained explainability into context, especially in the qualitative experiments, it would really help to compare to some baselines.
}
\\In Sec. S6 of the supplementary we have qualitatively compared our explanations to those of representer point selection [33]. Please refer to Sec. S6 for more details.
\\\\\textcolor{blue}{
l.126 nominator -> numerator
}
\\corrected.
\\\\\textcolor{blue}{
figures should be described more in detail, especially the qualitative one
}
\\Thank you. We added captions to all figures.
---
Rebuttal Comment 1.1:
Title: Thank you for the rebuttal
Comment: Thank you for the detailed responses and the new results.
The introduction of influence functions makes the manuscript a lot stronger, although they look somewhat unstable and may require a few more trials.
I am somewhat unsatisfied about the response to the size of the Kernel-DNNs, but acknowledge the restrictive length of the manuscript and hope a sentence/paragraph will be added.
Thank you for adding documentation and tutorials to the software.
Given the new additions, I feel comfortable to increase my score to 5. | Rebuttal 1:
Rebuttal: the global response
Pdf: /pdf/6df352f572cf66a034ab39a3f47c1240b9d3a015.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Non-adversarial training of Neural SDEs with signature kernel scores | Accept (poster) | Summary: This paper proposes to train Neural SDEs non-adversarially using signature kernel scores. The proposed method eliminates the mode collapse and instability that existing state-of-the-art methods suffer from due to adversarial training.
Strengths: 1. In all experiments, the proposed method outperforms existing methods.
2. The explanation in chapter three is clear and reproducible because the code is also provided.
Weaknesses: 1. This paper's main contribution includes using signature kernel scores for the discriminator, but the motivation for using signature kernels needs to be stronger.
2. There are no sub-captions for the panels of Figure 1, so it is unclear which generator each panel corresponds to.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. Is there a reason why you chose the signature kernel among the various kernels? Also, if different kernels are adaptable, I would like to know the results of your experiments with other kernels.
2. Which generator does the sub Figure in Figure 1 correspond to?
Minor corrections
3. Does the unconditional_nsde.ipynb included in the provided code need to be corrected?
```
#cond_ = data_type == "rBergomi"
cond_ = data_type == "gbm"
```
Confidence: 1: Your assessment is an educated guess. The submission is not in your area or the submission was difficult to understand. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Contribution**
Please see the response to Reviewer xQxq under the header **Significance**, where we explain that signature kernels allow us to introduce a class of scoring rules for infinite-dimensional spaces of paths, adaptable to spatiotemporal signals, and with strict properness and consistency guarantees.
**Figure 1**
In the preceding paragraph, we explain that the three subfigures are obtained by training Neural SDEs with three different discriminators. We agree with you that the caption should be expanded to clarify which histogram belongs to which discriminator, and we will amend the caption accordingly.
**Minor corrections**
Although this line does not affect the output of the code as it is a commented line, we appreciate that it can be confusing for the reader. The line was left to help guide the user on how to choose a different generation problem. We are happy to remove it.
---
Rebuttal Comment 1.1:
Comment: Thank you for the rebuttal. All my questions have been answered. I also read other reviews and authors' responses and decided to keep my initial rating. | Summary: In this work, the authors propose to generate time series with neural stochastic differential equations trained on a scoring-rule-based objective built on signature kernels. Borrowing the generator-discriminator pair from generative adversarial networks, this work nevertheless avoids adversarial training: the discriminator is evaluated by solving a system of linear PDEs, and the adjoint method is applied to reduce the memory cost. Experiments are conducted on sequential data, including financial datasets.
Strengths: 1. This paper is written in high clarity. All formulations and derivations are clearly elaborated.
2. The idea is promising and novel to combine generative adversarial networks and neural stochastic differential equations, avoiding adversarial training which can cause instability in training.
3. Rough path theory is a relatively new topic in mathematics. It has penetrated in the machine learning community since only 2019. It is interesting to utilize it in time series generation in machine learning.
Weaknesses: 1. In the experimental design, the baseline models are not sufficiently explored. The experiments should include diffusion models for time series generation: they are commonly used generative models nowadays, are themselves a special variant of SDEs, and would provide a useful comparison to the proposed approach.
2. The space and time complexity raise concerns, as applying the signature transform largely increases the dimensionality.
3. The datasets in the experiments are of relatively low dimensionality for time series data, which makes it hard to demonstrate the efficiency of the adjoint method in reducing the memory cost.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. How should the "order" of the signature, k, be selected? Is there any specially designed optimization method to help with it?
2. Why are the neural networks in the neural SDE regular neural networks? I am very curious whether more sophisticated deep learning models, such as U-Nets, have been examined and what their impact on the performance is.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The limitations of this work are not adequately addressed. It would be worth discussing the application scope of this work, which is restricted to time series generation, especially cases where the dimensionality is not high.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Baselines**
Please see the answer to Reviewer xQxq under the **Significance** and **Empirical evaluation** headers.
**Complexity**
As explained in Section 3.3, our signature kernel scores are evaluated by solving scalar-valued PDEs. Therefore, our approach circumvents the computational challenges associated with calculating high-dimensional truncated signatures and can even be applied to infinite-dimensional cases like our LOB experiments. Consequently, there is no need to select a truncation level. Please also see the response to reviewer xQXq (under the **Significance** header) where we also summarize the other benefits of utilizing signature kernel scores.
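Concretely, for two differentiable paths $x$ and $y$, the scalar-valued PDE in question is the Goursat problem solved by the signature kernel (per [1], cited under **Memory** below):
\[
\frac{\partial^2 k_{x,y}}{\partial s\,\partial t}(s,t)
  = \langle \dot{x}_s, \dot{y}_t \rangle \, k_{x,y}(s,t),
\qquad
k_{x,y}(0,\cdot) = k_{x,y}(\cdot,0) = 1,
\]
so the kernel is obtained by a PDE solve whose cost does not depend on any signature truncation level.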
**Dimensionality**
While we agree that our Neural SDE model is trained on relatively low-dimensional time series data, we also include an experiment on spatiotemporal LOB processes which are infinite-dimensional. As the dimension escalates and the memory cost becomes prohibitive, we can leverage the adjoint method for training the Neural SDE model.
**Architecture**
We chose to keep the vector fields governing the Neural SDEs as simple MLPs, aligning with the classical architecture from [1]. We appreciate your insight and note that studying different, more sophisticated deep learning models to parametrize the vector fields associated with a given Neural SDE is an interesting research topic.
[1] Kidger, Patrick, James Foster, Xuechen Li, and Terry J. Lyons. "Neural sdes as infinite-dimensional gans." In International conference on machine learning, pp. 5453-5463. PMLR, 2021.
---
Rebuttal Comment 1.1:
Title: On the diffusion models as baselines
Comment: Thank you for your response. It generally answers all my questions, except one point as follows.
In terms of the baselines, your rebuttal to Reviewer xQxq mentioned that diffusion models suffer from limitations on resolution-invariant path generation. The reason lies in the score-based nature of diffusion models. Specifically, these models do not possess the canonical Lebesgue measure in infinite dimensions and hence do not have a coherent notion of density.
Would you mind further explaining this reason with more details and less assumption of the audience's mathematical background knowledge? Thanks a lot!
---
Reply to Comment 1.1.1:
Comment: Certainly! The limitations of score-based continuous-time diffusion models to generate data in a resolution-invariant way are precisely what our Neural SDE model addresses, so it is important to clarify this point. For the sake of brevity, we will reference [1] to avoid replicating the mathematical expressions in this response. The crux of training diffusion models using scores hinges on a result from Anderson [2] stating that the inversion of a diffusion process is still a diffusion process — only running backward in time. This process's drift can be described through the gradient of the density of the marginals, $\nabla_x \log p_t(x)$ (i.e. the "score"), as highlighted in equation (6) of [1]. However, when the state X evolves in an infinite-dimensional space (like paths in our context), the notion of density $p_t$ becomes ambiguous due to the absence of a canonical Lebesgue measure. One could, in theory, adopt an alternative reference measure on pathspace, such as the Wiener measure for instance, and derive the Radon-Nikodym derivative (if it exists) against this measure. Yet, such exploration remains uncharted territory, likely necessitating a deep mathematical dive. We hope this response answers your question.
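Written out explicitly (in the standard form, matching the equations referenced in [1] up to notation, with $\bar{W}$ a Brownian motion under reversed time flow), the forward diffusion and its Anderson reversal are:
\[
\mathrm{d}X_t = f(X_t, t)\,\mathrm{d}t + g(t)\,\mathrm{d}W_t,
\]
\[
\mathrm{d}X_t = \big[\, f(X_t, t) - g(t)^2\,\nabla_x \log p_t(X_t) \,\big]\,\mathrm{d}t + g(t)\,\mathrm{d}\bar{W}_t,
\]
which makes clear that the reverse-time drift, and hence the entire training procedure, hinges on the density $p_t$ being well-defined.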
**References**:
[1] Song, Yang, et al. "Score-based generative modeling through stochastic differential equations." arXiv preprint arXiv:2011.13456 (2020).
[2] Brian D O Anderson. "Reverse-time diffusion equation models." Stochastic Process. Appl., 12(3): 313–326, May 1982. | Summary: This paper introduces a novel approach for training Neural SDEs (Stochastic Differential Equations) as generative models for sequential data without using adversarial techniques. The authors propose a novel class of scoring rules based on signature kernels, which offer stability and avoid issues like mode collapse that are common in GAN-based training. The new formulation allows for memory-efficient adjoint-based back-propagation and enables the generation of spatiotemporal data, demonstrating superior performance over alternative methods in various tasks, including simulation of rough volatility models, conditional probabilistic forecasts of forex pairs, and mesh-free generation of limit order book dynamics.
Strengths: - The authors propose a new method with rigorous theoretical justification and verify it with numerical experiments.
- The proposed method does not require any adversarial training, so presumably training the generative model with the method is more stable than GANs.
- The proposed method achieves significantly better performance than SDE-GAN without adversarial training.
Weaknesses: I am not an expert in this field (stochastic differential equations (SDEs) for continuous-time generative models), so it is difficult for me to point out critical weaknesses of the proposed method.
- I highly recommend that the authors elaborate on the evaluation metrics (e.g. the KS score) in detail for those who are not familiar with this field.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: - Is the proposed method applicable to generating images?
- How sensitive is the proposed model to hyper-parameters, and what would be the rules of thumb to tune those hyperparameters?
Confidence: 1: Your assessment is an educated guess. The submission is not in your area or the submission was difficult to understand. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 4 excellent
Limitations: Since my expertise is not stochastic differential equation, it is hard for me to point out critical limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Evaluation Metric**
The two-sample Kolmogorov-Smirnov (KS) test is a nonparametric statistical test used to determine whether two sets of samples come from the same continuous distribution on $\mathbb{R}$. The KS test statistic is the maximum absolute difference between two empirical cumulative distribution functions (CDFs). We appreciate that not every reader may be familiar with the KS test and are happy to amend the paper to include this explanation in the Appendix.
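As an illustrative sketch (not part of the paper's code; the synthetic samples and the `ks_statistic` helper below are purely for demonstration), the statistic can be computed by hand and cross-checked against `scipy.stats.ks_2samp`:

```python
# The two-sample KS statistic is the maximum absolute difference between
# the two empirical CDFs; scipy computes it (plus a p-value) directly.
import numpy as np
from scipy.stats import ks_2samp

def ks_statistic(a, b):
    """Max absolute difference between the empirical CDFs of a and b."""
    a, b = np.sort(a), np.sort(b)
    grid = np.concatenate([a, b])  # evaluate both CDFs at every sample point
    cdf_a = np.searchsorted(a, grid, side="right") / len(a)
    cdf_b = np.searchsorted(b, grid, side="right") / len(b)
    return np.max(np.abs(cdf_a - cdf_b))

rng = np.random.default_rng(0)
x = rng.normal(0.0, 1.0, size=500)  # e.g. marginals of real paths at time t
y = rng.normal(0.5, 1.0, size=500)  # e.g. marginals of generated paths at t

stat, pvalue = ks_2samp(x, y)
assert np.isclose(stat, ks_statistic(x, y))  # manual and scipy versions agree
```

A small statistic (and large p-value) means the two marginal distributions are statistically indistinguishable, which is how the per-marginal tables in the experiments should be read.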
**Image Generation**
Yes, our method can be applied to images sampled at an arbitrary spatial resolution. As a training objective, one would use the scoring rule associated with one of the three kernels in $L^2$ used in the LOB example, and consider the solution of the Neural SPDE only at the final time instead of the whole solution trajectory.
**Hyperparameters**
In general, one can bucket the hyperparameters of the network into two categories: the “nice to have more” type (width and depth of neural networks governing the vector fields of the Neural SDEs, steps to train the model, dyadic order of the PDE solver associated to the signature kernel, step size in the generator SDE solver) which are largely constrained by computational considerations.
The other category is the set of hyperparameters which require more careful calibration. These include which path transformations to apply, and the appropriate learning rate to select. As with any generative model, successful training is especially sensitive to these choices. Due to space constraints, we did not go into this much detail in the body of the paper. However, in the preamble to Appendix B (and in Appendix B.1-5) we provide some general rules of thumb to choose these hyperparameters, including explicit choices to replicate the results we obtained. We also provide relevant commentary in each of the notebooks found in the code repository accompanying the paper.
---
Rebuttal Comment 1.1:
Title: Response to the rebuttal
Comment: Thank you for your answers. I keep my current score but with low confidence score as before. | Summary: This paper considered the training of neural SDEs, where are continuous-time generative models for sequential data. State-of-the-art approaches train neural SDEs in an adversarial manner and suffer from instability. In this work, a non-adversarial training approach is proposed for stable and effective training of neural SDEs. Particularly, a new class of scoring rules based on signature kernels is defined and used as objective for training. Extensive evaluations are performed to demonstrate the effectiveness of the proposed training objective.
Strengths: The differences between the related works and the proposed method are clearly discussed.
The proposed method seems solid in theory.
The possible extensions of the proposed method are extensively discussed.
Weaknesses: The significance of the work is somewhat unclear to me. In the related work section, two works [PADD21] and [BO21] are discussed that are closely related to the proposed method. They are different from the proposed method in terms of the data type (discrete/continuous) and the setting of continuous-time processes (general/neural SDE specific). However, it is not clear how significant the proposed method is from the above two aspects.
The related work section is hard to follow. Lots of works are discussed from different perspectives: Neural SDE; score-based generative models; and scoring rules for generative networks. For me, these discussions are a bit disjointed. They are mainly discussed to show the differences between them and the proposed method. I think it is necessary to clearly locate the proposed method in the related literature and then discuss it. A table or figure visualizing the overlaps between the proposed method and the other related methods would make this clearer.
The proposed method is heavy in math. The notations are not clearly explained. For example, in line 95, what are Omega, F, and P? What is the law representing in line 97? More intuitive descriptions and clear definitions would greatly help the reading.
Though the mathematical foundation is solid, the empirical evaluation of the proposed method compared to SOTA methods is weak. Only SDE-GAN is considered for comparison. Score-based generative models (SGMs) are also leveraging scoring rules. Also, [BO21] introduced scoring rules for continuous-time processes. Since these works are closely related to the proposed method, it is necessary to compare with them. If that is not possible, it is necessary to explain the reasons, which I think would also help clarify the significance of the proposed method.
There is a lack of experimental analysis. In Section 4.1, there is no explanation or analysis of the values shown in Table 1. Why is SDE-GAN better at t=19? Similarly, the conclusions of the evaluation discussed in Section 4.4 are unclear to me.
The authors are claiming that the proposed method requires less memory compared with SOTAs. There is a lack of evaluation in terms of computational cost or the memory cost to support this claim.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: Please see the details questions in the weakness section. I would love to change my rating if the authors can address my concerns about the significance and the evaluation.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 2 fair
Limitations: The limitations of the proposed method are well discussed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Significance**
We believe the innovation of our work is twofold:
1) the introduction of a new class of scoring rules for infinite-dimensional spaces of paths using signature kernels, adaptable to spatiotemporal signals, and with strict properness and consistency guarantees;
2) the deployment of these scores to train Neural SDEs, resulting in a novel generator-discriminator pair which is mesh-free, offers memory-efficient backpropagation and surpasses other Neural SDE training methods in terms of stability and performance.
As mentioned in lines 80-84, in [PADD21] the authors construct statistical scores for discrete sequences, with strict properness guarantees only ensured under stringent Markov-type assumptions. A key aspect of our work is to develop consistent and proper scoring rules in the continuous-time, non-Markovian setting and use these in the context of generative modelling for functional data. Meanwhile, [BO21] investigates scoring rules for continuous-time processes using truncated signatures, not signature kernels. Due to the truncation $N$ of the signature, these scores aren't strictly proper. Moreover, approaches based on truncated signatures are hindered for processes with values in $d$-dimensional spaces, even for moderate $d$, because of the signature's exponential complexity $\mathcal{O}(d^N)$. Approaches based on truncated signatures are unusable for infinite-dimensional cases like our LOB experiments. In contrast, our signature kernel scores sidestep these computational challenges.
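As a toy sketch of point 1) (not the paper's implementation: a plain RBF kernel on flattened, discretised paths stands in for the signature kernel, which we instead evaluate by solving a PDE), kernel scores take the general form $S(\mathbb{P}, y) = \tfrac{1}{2}\,\mathbb{E}_{X,X' \sim \mathbb{P}}[k(X,X')] - \mathbb{E}_{X \sim \mathbb{P}}[k(X,y)]$ and can be estimated from generated samples:

```python
import numpy as np

def rbf_kernel(x, y, sigma=50.0):
    # Stand-in kernel on flattened, discretised paths; the bandwidth is
    # chosen to match the scale of the random-walk paths below.
    return np.exp(-np.sum((x - y) ** 2) / (2.0 * sigma**2))

def kernel_score(generated, target, k=rbf_kernel):
    # Unbiased estimate of S(P, target) from m samples `generated` ~ P:
    #   S(P, y) = 1/2 E[k(X, X')] - E[k(X, y)].
    m = len(generated)
    cross = sum(k(generated[i], generated[j])
                for i in range(m) for j in range(m) if i != j)
    fit = sum(k(x, target) for x in generated)
    return cross / (2.0 * m * (m - 1)) - fit / m

rng = np.random.default_rng(0)
target = np.cumsum(rng.normal(size=50))                      # one observed path
good = [np.cumsum(rng.normal(size=50)) for _ in range(20)]   # same law as target
bad = [np.cumsum(rng.normal(3.0, 1.0, size=50)) for _ in range(20)]  # drifted law

# Strict properness means the expected score is minimised by the true law,
# so samples from the correct law should score lower than drifted ones.
assert kernel_score(good, target) < kernel_score(bad, target)
```

Averaging this score over observed paths $y$ gives a training objective for the generator with no discriminator optimisation loop, which is what makes the training non-adversarial.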
**Related work**
Our work clearly positions itself at the nexus of Neural SDEs (classically trained as GANs), and continuous-time diffusion models (classically trained using scoring rules). It's important to note that, while we reference continuous-time diffusion models, they inherently possess limitations when it comes to generating paths in a resolution invariant manner as we do. This limitation arises from the absence of a canonical Lebesgue measure in infinite dimensions which precludes a coherent notion of density (i.e. classically defined, in finite-dimensional measure theory, as the Radon–Nikodym derivative with respect to the Lebesgue measure), which is fundamental to the foundation of all score-based diffusion models.
**Notation**
While we respect and value the perspective of each reviewer, we note that there are differing opinions on this matter.
- Reviewer YPT1 states that *The theoretical part of the paper is well-written, with clear mathematical formulations and adequate definition of symbols*.
- Reviewer kdBJ says that *This paper is written in high clarity. All formulations and derivations are clearly elaborated.*
To answer specifically the questions raised:
- Line 95, as clearly stated $(\Omega, \mathcal{F}, \mathbb{P})$ is a probability space.
- Line 97, classical definition of law of a process.
**Empirical evaluation**
As explained above, score-based diffusion models do not apply to the generation of functional data such as paths. Besides, the scoring rules introduced in [BO21] are based on truncated signatures, which is precisely the second baseline we consider in the experiments.
**Experimental analysis**
Due to space constraints, we were not able to present the majority of our experimental analysis in the body of the paper. However, we do provide a more comprehensive analysis in each of the relevant sections of Appendix B, including a discussion of the hyperparameters used to achieve each set of results, further evaluation metrics, and discussion regarding both the qualitative and quantitative results we obtained. We also provided comprehensive Jupyter notebooks corresponding to each of the examples presented in the body (unconditional, conditional, and spatiotemporal generation). Remarks regarding the results found in Table 1 can be found in the paragraph directly above it. We appreciate that the reader may not be familiar with the Kolmogorov-Smirnov test; we will include an explanation of it in the final version of the paper. Regarding why SDE-GAN performs better at the t=19 marginal: although SDE-GAN indeed performs better, our model also performs close to the test threshold acceptance level of 5% at the same marginal, indicating that the outperformance is likely immaterial. SDE-GAN performs significantly worse at later marginals, as shown in Appendix B.1. Regarding the concern related to Section 4.4, we refer to our answer to Reviewer YPT1.
**Memory**
We note that we do not claim anywhere in the paper that our method requires less memory than SDE-GAN, but rather that it belongs to the class of Neural SDEs, which offer memory-efficient backpropagation (aka *adjoint methods* or *optimise-then-discretise*). We also note that the computational cost of the signature-PDE solver associated to our discriminator can be found in [1] and a general discussion of the computational complexity associated to solving Neural SDEs can be found in [2]. We will include these considerations in the final version of the paper.
**References**
[1] Salvi, Cristopher, Thomas Cass, James Foster, Terry Lyons, and Weixin Yang. "The Signature Kernel is the solution of a Goursat PDE." SIAM Journal on Mathematics of Data Science 3, no. 3 (2021): 873-899.
[2] Kidger, Patrick. "On neural differential equations." arXiv preprint arXiv:2202.02435 (2022).
---
Rebuttal Comment 1.1:
Comment: Thanks for the authors’ responses. My questions are all answered. However, I still have a concern related to the presentation of the paper. For an audience who may not be deeply familiar with the concepts presented, I find the current presentation is not easy to follow. In my opinion, it is necessary for the authors to improve the presentation by minimizing assumptions about the readers’ background knowledge.
Overall, I think this work is novel and theoretically solid. Given the authors’ responses and potential improvements to the presentation, I would like to raise my score. | Rebuttal 1:
Rebuttal: We thank all the reviewers for their useful feedback and insightful questions. We hope that our responses will address all raised concerns. | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: This paper introduces a novel approach to training Neural SDEs using non-adversarial methods based on signature kernel scores. The authors demonstrate that the signature kernel score is strictly proper and provide consistent estimators for such scores. The effectiveness of their approach is demonstrated in various tasks.
Strengths: 1. The method is novel as it offers a non-adversarial alternative to traditional GAN-based training methods for Neural SDEs. This work is likely the first to introduce the signature kernel method to the Neural-SDE model. It achieves superior performance compared to the GAN method while being easier to train.
2. The theoretical part of the paper is well-written, with clear mathematical formulations and adequate definition of symbols. The proof of strict properness and consistency, although direct consequences of prior works, offers robust guarantees.
Weaknesses: 1. Limited comparisons: The authors claim that their procedure outperforms alternative ways of training Neural SDEs, but they only compare it with the SDE-GAN. In the related works, the authors mention the latent SDE, which is easier to train than SDE-GAN, and claim that it yields worse performance than the SDE-GAN. While I acknowledge that GAN generally has better model capacity than VAE, can you provide quantitative results to substantiate this claim? The original paper only compares with the latent ODE from 2019.
2. Lack of visualizations: Although the paper visualizes the generated distribution of paths conditioned on a previous path using their model, it would be beneficial to include a comparison with the paths generated by the SDE-GAN. This would provide a clearer understanding of how your outputs capture more characteristics of financial markets. Furthermore, the generated paths exhibit a large variance, which could be addressed or explained.
3. Explanation of the LOB experiments: The simulation of LOB dynamics and the decision not to use autoregressive prediction are interesting, but how is the proposed design better than the five autoregressive generators that are listed? The paragraph discussing this topic appears to be hastily written. The reported average scores alone do not provide sufficient information.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. Can you provide quantitative results to show latent-SDE performs worse than SDE-GAN and your model?
2. Can you visualize conditional path distributions generated by SDE-GAN and explain why your generated paths are better?
3. Could you provide more details on how your model is useful in limit order book (LOB) prediction and elaborate on the design choices you made?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The authors have discussed the limitations well. The work assumes the noise is continuous without jumps on event arrival, and is therefore not applicable to noise governed by point processes. The work poses certain regularity conditions on the sampled paths. The work could benefit from including more varied evaluation metrics to assess the performance of the generated data.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Comparisons**
The comparison between SDE-GANs and latent SDEs (and the limitations of the latter) has been discussed in the PhD thesis [1, Section 4.3.3], and further analysis as well as quantitative results can be found in [2]. Therefore we decided to only include the most expressive model among all Neural SDEs in the benchmark and showcase that our approach, while non-adversarial, outperforms SDE-GANs, unlike latent SDEs. We will refer the reader to [1,2] to support the claim made in the related work section.
**Visualizations**
We did not visualize conditional path distributions generated by SDE-GAN, as this would require significant modifications to the existing SDE-GAN implementation (available in the torchsde package). The authors of SDE-GAN do present a conditional example in their work, however, it is a class-conditioning example where the conditioning variable is discrete. For conditioning data arising from a continuous variable in an infinite-dimensional space (such as paths), this procedure cannot be implemented.
In the unconditional setting, our generated paths do not have a larger variance than those exhibited by the data measure (as confirmed by the Kolmogorov-Smirnov test on the marginals). This holds true in the conditional setting. We agree with your suggestion to further comment on Figure 2. Therefore, we propose to add the following remark: Although the three conditional distributions exhibit a larger variance (from left to right), this is to be expected, since they correspond to three different (increasing) levels of quadratic variation exhibited by the conditioning paths.
**LOB Experiment**
The primary objective of the LOB experiment is to demonstrate that the newly proposed signature kernel scores can be used to effectively train continuous space-time generative models. However, we agree with the reviewer that our modelling choices can be further motivated and propose to add the following comments.
Autoregressive generators produce “one-step-ahead” predictions by parameterizing the conditional distribution of a process at a specific point in time, conditioned on the previous k observations. Instead, we propose to use the Neural SPDE model as a generator which directly produces continuous time LOB trajectories in a discretization-free manner (also see the answer to Reviewer xQXq under the **Significance** header). Furthermore, we note that recent works in mathematical finance [3,4] have established that LOB dynamics are well-described by mechanistic models in the form of SPDEs. In light of these results, the Neural SPDE model is a natural choice for the generator as it offers a strong prior on the model space.
We report the KS test statistic for each individual marginal, both in the spatial and temporal dimensions, in line with the other experiments in the paper. We recognize that it would be beneficial to include additional evaluation metrics. However, well-established and general-purpose metrics (that do not require expert knowledge on the specific task at hand) for assessing functional spatiotemporal generative models are still lacking. This gap arises due to the novelty of this field. Potential approaches could involve computing spatiotemporal semivariograms. However, we did not find adequate Python libraries for our purpose. Another direction to explore as future work would be to employ a “train on synthetic, test on real” methodology [5]. We will mention this in the future work section.
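The "train on synthetic, test on real" protocol of [5] can be sketched with any downstream model: fit it on generator output, then score it on held-out real data. The version below uses a simple nearest-centroid classifier as a stand-in; all helper names and the toy data are hypothetical.

```python
import numpy as np

def tstr_accuracy(X_syn, y_syn, X_real, y_real):
    """Train on synthetic, test on real: fit a nearest-centroid classifier
    on synthetic samples and report its accuracy on real samples."""
    classes = np.unique(y_syn)
    centroids = np.stack([X_syn[y_syn == c].mean(axis=0) for c in classes])
    # Distance of each real sample to each class centroid.
    dists = np.linalg.norm(X_real[:, None, :] - centroids[None, :, :], axis=-1)
    return float((classes[dists.argmin(axis=1)] == y_real).mean())

# Toy, well-separated data: synthetic and real share the same two clusters.
X_syn = np.array([[0.0, 0.0], [0.2, 0.1], [5.0, 5.0], [4.9, 5.1]])
y_syn = np.array([0, 0, 1, 1])
X_real = np.array([[0.1, -0.1], [5.1, 4.8]])
y_real = np.array([0, 1])
```

A high TSTR score suggests the synthetic data preserves the task-relevant structure of the real data.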
**References**
[1] Kidger, Patrick. "On neural differential equations." arXiv preprint arXiv:2202.02435 (2022).
[2] Kidger, Patrick, et al. "Efficient and accurate gradients for neural sdes." Advances in Neural Information Processing Systems 34 (2021): 18747-18761.
[3] Hambly, Ben, Jasdeep Kalsi, and James Newbury. "Limit order books, diffusion approximations and reflected SPDEs: from microscopic to macroscopic models." Applied Mathematical Finance 27.1-2 (2020): 132-170.
[4] Cont, Rama, and Marvin S. Müller. "A stochastic partial differential equation model for limit order book dynamics." SIAM Journal on Financial Mathematics 12.2 (2021): 744-787.
[5] Esteban, Cristóbal, Stephanie L. Hyland, and Gunnar Rätsch. "Real-valued (medical) time series generation with recurrent conditional gans." arXiv preprint arXiv:1706.02633 (2017). | null | null | null | null | null | null |
Open Visual Knowledge Extraction via Relation-Oriented Multimodality Model Prompting | Accept (poster) | Summary: The authors claim that they propose OpenVik, which consists of an open relational region detector to detect regions potentially containing relational knowledge and a visual knowledge generator that generates format-free knowledge by prompting the large multimodality model with the detected region of interest. However, several issues remain, as detailed below.
Strengths: The authors claim that they propose OpenVik which consists of an open relational region detector to detect regions potentially containing relational knowledge and a visual knowledge generator that generates format-free knowledge by prompting the large multimodality model with the detected region of interest.
Weaknesses: 1. Paper has weak technical depth, it requires more technical details.
2. Further experimental analysis needed. Please give qualitative validation.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. Paper has weak technical depth, it requires more technical details.
2. Further experimental analysis needed. Please give qualitative validation.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ***Technical depth and details***
**A**: We would like to stress several distinct technical contributions of OpenVik compared with existing models:
- **Open relational region detector**: Existing detectors often focus on locating objects, while OpenVik is trained to directly detect regions that capture **interactions** and **abstract semantic structures**, such as vivid verbs like “attached to” and nuanced details like “full of”. Alternative region detectors often need additional visual controls, such as mouse clicks [1, 2], or language controls [3] over a combination of predefined sets of textual properties. OpenVik dispenses with this additional input through its automatic visual grounding ability, which significantly improves **knowledge diversity and freshness**.
- **Knowledge generator**: One key difference is that OpenVik performs reasoning-driven generation. Existing regional captioners or knowledge generators often rely on object-level annotation, where the text decoder generates descriptions based on the localized object set [4]. **This leads to the model behaving like a bag of words and lacking deep semantic understanding.** OpenVik provides **better knowledge grounding** by conditioning the generator on the detected relational region. It includes the ability to **automatically discover the types of interest** that are not only salient but also benefit downstream relational reasoning, as demonstrated on various downstream tasks in Section 5.
- **Language diversity**: Training such a new paradigm for open relational visual knowledge extraction is not trivial. The lack of diverse training data and distribution bias present significant challenges. As shown in Section 4.3 and Figure 3, the **diversity-driven data enhancement** strategies put forth in our work can effectively optimize knowledge richness. They are also generalizable to other models or backbones.
In summary, our work has attempted to explore uncharted territory in the field, with extensive knowledge evaluation and a variety of downstream tasks demonstrating its effectiveness.
***Further experimental analysis and qualitative validation***
**A**: Thank you for emphasizing the importance of experimental analysis, particularly with respect to qualitative validation. In our study, we have conducted a comprehensive evaluation of the proposed open visual knowledge extraction pipeline. This includes internal assessment of knowledge quality and external evaluation of downstream tasks.
- **Internal Knowledge Quality Evaluation**: We assessed the knowledge generation performance using traditional generative metrics and conducted an in-depth analysis of knowledge quality. Moreover, we compared our approach with existing knowledge sources, as illustrated in Figure 2.
- **External Downstream Tasks Evaluation**: To gauge whether OpenVik's extracted open visual knowledge can enhance reasoning and inference capabilities in various applications, we explored three multimodal tasks, detailed in Section 5. Our analysis included extensive qualitative and quantitative assessments of the results.
- **Qualitative Validation**: We also recognized the importance of providing concrete examples to illustrate our findings. Therefore, Appendix F included a case study featuring qualitative examples of Open Visual Knowledge from OpenVik. More qualitative illustrations from the three downstream applications are available in Appendix G.
By combining these different methods of evaluation, our study provides a well-rounded perspective on the effectiveness and potential applications of OpenVik. We appreciate the valuable feedback and will ensure that the final version of our paper highlights these aspects in a clear and compelling manner.
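As one concrete instance of the diversity-style metrics discussed in the internal evaluation, a distinct-n measure over generated knowledge strings can be sketched as follows. This is an illustrative proxy, not necessarily the exact metric used in the paper:

```python
def distinct_n(sentences, n=2):
    """Ratio of unique n-grams to total n-grams across generated sentences;
    higher values indicate more diverse generations."""
    ngrams = []
    for s in sentences:
        toks = s.split()
        ngrams.extend(tuple(toks[i:i + n]) for i in range(len(toks) - n + 1))
    return len(set(ngrams)) / len(ngrams) if ngrams else 0.0
```

For example, two identical generated facts yield a distinct-2 score of 0.5, while two facts with no shared bigrams score 1.0.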
---
Rebuttal Comment 1.1:
Comment: Dear reviewer QgT9,
We hope our detailed rebuttal has addressed some of your concerns. As we are getting really close to the deadline of the discussion phase, we would really appreciate it if you could kindly give us a chance to further address your questions and/or concerns if any.
Many thanks,
Authors
---
Rebuttal Comment 1.2:
Comment: Thanks for the detailed explanation to my questions. I've carefully read your reply and other reviewers' comments. Therefore, I've changed my rating from 6 to 7.
---
Reply to Comment 1.2.1:
Comment: Dear reviewer QgT9,
Thank you for raising the score, and we will make sure to properly incorporate the additional discussions from the rebuttal into our revision. | Summary: The authors present OpenVik, a method for open visual knowledge extraction. It consists of an open relational region detector to detect regions potentially containing relational knowledge and a visual knowledge generator that generates format-free knowledge by prompting the large multimodality model with the detected region of interest. They explore two data enhancement techniques for diversifying the generated format-free visual knowledge. Extensive knowledge quality evaluations highlight the correctness and uniqueness of the extracted open visual knowledge by OpenVik. Moreover, they show consistent improvements across various visual reasoning applications, indicating the real-world applicability of OpenVik.
Strengths: - Well-written
- Extensive Evaluations (Ablation Study and Applications)
- OpenVik outperforms traditional approaches and generates rich Scene Graphs
Weaknesses: - The importance of the pre-training for the Open Relational Region Detector is mentioned but not evaluated
- It seems that the performance of OpenVik is mainly based on BLIP's performance; an ablation of OpenVik vs. BLIP is missing.
- Usefulness of Scene Graph needs to be evaluated instead of diversity
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: - How useful are the generated facts?
There seems to be a large variance in the entities, relations and the overall details in the facts. When looking at the produced facts, it can be seen that the facts generated for individual image patches could be quite useless and random, e.g. "dark brown mane growing behind head", and also not related to the main entities in the image, e.g. zebra.
There is no real structure in the facts.
- Do the generated facts suffer from typical problems like hallucination?
If this is the case authors should also elaborate on that.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: The paper should elaborate more on the difficulties and the limitations of their approach.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: >***The importance of the pre-training for the Open Relational Region Detector is mentioned but not evaluated***
**A**: Thanks for the observation. Please refer to our added results under ***Q2*** in the global response.
>***Ablation of OpenVik vs. BLIP***
**A**: BLIP primarily focuses on whole-image captioning, making it not directly comparable to OpenVik's relational knowledge generation. To enable a comparison, we take the relational regions detected by our trained region detector as the input for the BLIP decoder. Please refer to our updated results under ***Q1*** in the global response.
>***Usefulness of Scene Graph needs to be evaluated instead of diversity***
**A**: Besides internal knowledge quality measures such as diversity, the three downstream external tasks in Section 5 extend beyond these metrics.
These tasks are designed to provide external validation of the effectiveness of the knowledge generated by OpenVik and of its capability to enhance downstream tasks. We follow a zero-shot setting, although this augmentation can also be conducted with other backbones. Results demonstrate that the backbone model significantly benefits from the context provided by OpenVik, underscoring the value of the extracted visual knowledge.
To demonstrate generalizability, we expanded our comparisons to include an additional contemporary backbone variant (BLIP, R2C [1]) for each application, as shown below (See full Tables 13-15 in the attached PDF). The results confirm that this added model benefits from OpenVik's contexts, similar to the zero-shot ones.
| Method | Recall@1 | Recall@5 | Recall@10 | Avg |
| -------------------- | ----- | ------- | ------ | -------- |
| BLIP | 63.11 | 86.30 | 91.10 | 80.17 |
| OpenVik + BLIP | 65.23 | 87.71 | 91.90 | 81.61 |
| Method | Accuracy | Precision | Recall | F1 |
| -------------------- | ----- | ------- | ------ | -------- |
| BLIP | 70.42 | 65.32 | 69.25 | 67.23 |
| OpenVik + BLIP | 80.25 | 72.55 | 70.61 | 71.57 |
| Method | Accuracy | Precision | Recall | F1 |
| ----------------- | -------- | --------- | ------ | ----- |
| R2C | 62.50 | 62.50 | 62.45 | 62.47 |
| OpenVik + R2C | 67.40 | 67.54 | 67.43 | 67.48 |
[1] Zellers, Rowan, et al. "From recognition to cognition: Visual commonsense reasoning." CVPR, 2019.
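The Recall@k values reported in the retrieval table above follow the standard definition: the percentage of queries whose ground-truth item appears among the top-k retrieved results. A generic sketch (hypothetical helper and toy data, not the authors' evaluation code):

```python
def recall_at_k(ranked_lists, ground_truth, k):
    """Percentage of queries whose ground-truth item appears in the top-k results."""
    hits = sum(gt in ranked[:k] for ranked, gt in zip(ranked_lists, ground_truth))
    return 100.0 * hits / len(ground_truth)

# Toy example: 3 queries, one correct item each.
ranked = [["b", "a", "c"], ["c", "b", "a"], ["a", "c", "b"]]
truth = ["a", "a", "a"]
```

Averaging Recall@1, Recall@5, and Recall@10 gives the "Avg" column in the table.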
>***How useful are the generated facts? the facts could be quite useless and random, e.g. "dark brown mane growing behind head", and also not related to the main entities in the image, e.g. zebra. There is no real structure in the facts.***
**A**: Thank you for your insightful observation regarding the variance in the generated facts. It is essential to understand that the task output (such as "dark brown mane growing behind head") intentionally does not follow the conventional subject-verb-object (SVO) structure. Our goal is to explore and capture nuanced visual semantics in a more flexible and unconstrained manner, without being restricted by fixed formats. Let's break down the semantics of this example:
- Subject: "mane" (referring to the hair on the neck of a mammal like a horse, lion, or zebra)
- Adjectives: "dark brown" (specifying the color of the mane)
- Verb: "growing" (describing the appearance or position of the mane)
- Prepositional Phrase: "behind head" (providing the location of the mane)
This description offers a detailed and relevant depiction of the main entity, focusing on attributes such as color and position, and does not adhere strictly to the SVO structure.
Visual relational knowledge, as opposed to traditional text-based extraction, emphasizes intricate details like tools, sizes, and spatial relationships. By capturing these complexities, our approach complements the existing text-based knowledge literature, enhancing logical reasoning and promoting explainable AI in visual tasks. This can also alleviate hallucination problems in QA/captioning and add a layer of relational regularization to factual knowledge, mitigating bias in large language model prompting.
We do recognize that the language generation process may lead to variations in the quality and structure of the facts generated. This is reflective of the diversity and intricacy found within visual information and indeed points to areas for further refinement in our model. However, this does not diminish the significance of open visual knowledge extraction, which serves as an indispensable component in capturing the visual aspect of world information.
>***Do the generated facts suffer from hallucination***
**A**: Thank you for raising the important question regarding potential hallucinations in knowledge generation. It is insightful to recognize that unbalanced and noisy distributions within the training data can indeed lead to errors in the knowledge produced. If we view hallucinations as unwarranted inferences based on the input, then such inaccuracies in OpenVik and comparable baselines typically result from detection errors caused by data bias, i.e., associating the wrong features with a class/label. Two illustrative failure cases can be found in the attached PDF, Figure 1. For example, in the left figure, a ladder has been misidentified as a towel, leading to the erroneous description of a 'blue towel hanging from dry shower.' In the right figure, a 'black speaker by flat tv' is generated, although the speaker is not present in the image, possibly reflecting common co-occurrences within the dataset. To solve this problem, the key is identifying the confounding features in class labeling.
The unwarranted inference error is more prevalent in QA and captioning tasks than in the fact generation process. However, OpenVik helps mitigate hallucinations in QA and captioning since it can discover unique relational facts. OpenVik's performance metrics demonstrate that it maintains a desirable balance between diversity and validity, offering more accurate grounding that can enhance high-level tasks such as storytelling.
---
Rebuttal Comment 1.1:
Comment: Dear reviewer nGQ3,
We hope our detailed rebuttal has addressed some of your concerns. As we are getting really close to the deadline of the discussion phase, we would really appreciate it if you could kindly give us a chance to further address your questions and/or concerns if any.
Many thanks,
Authors | Summary: This paper introduces a novel approach to visual knowledge extraction named OpenVik. It comprises three main components: the Open Relational Region Detector, the Format-Free Visual Knowledge Generator, and the Diversity-Driven Data Enhancement module.
The Open Relational Region Detector, built on the object detection framework FasterRCNN, is fine-tuned to discern relation-oriented visual regions in images. The Format-Free Visual Knowledge Generator's purpose is to utilize this relational region information to create more relation-relevant visual knowledge from an image. The last component, the Diversity-Driven Data Enhancement module, functions as a knowledge post-processing module. Its primary goals include filtering out extraneous knowledge facts and enriching relations via synonym substitution, leveraging external knowledge resources.
The experimental results indicate that OpenVik surpasses previous benchmarks in most test scenarios, implying its effectiveness in generating high-quality relational knowledge from images.
Strengths: 1. The visual representation of the data through figures and charts is commendable. In particular, Figure 1 effectively illustrates the primary components of OpenVik. The color-coded highlights for 'entities', 'relations', and 'descriptive details' significantly enhance the readability of the examples.
2. The architectural design of the model, along with the selected loss functions, is robust and well-conceived. The three components harmoniously co-function to train on current relational datasets and subsequently generate novel open visual knowledge from images.
3. The level of detail in the explanation of the proposed method is noteworthy. The comprehensive presentation allows readers to gain a thorough understanding of OpenVik even after a single read-through.
Weaknesses: 1. The paper could benefit from a stronger justification for its approach. With the existence of established multimodal models like BLIP[1], mPLUG[2], OFA[3], and recent advancements like BLIP-2[4], MiniGPT4[5], all of which can potentially perform Visual Knowledge Extraction through prompting, the specific need for OpenVik is not explicitly clarified. Further explanation about what unique advantages OpenVik brings would be beneficial.
2. The comparative analysis in section 4.1 appears unbalanced. The use of older models, such as the captioning model and relational captioning model, for benchmarking does not provide a robust basis for comparison. It would be more persuasive if contemporary models were included in the comparison.
3. The ablation study appears to be not comprehensive enough. Crucial distinctions, such as those between the Open Relational Region Detector and the standard FasterRCNN, are not sufficiently elucidated. More detailed examinations of such comparisons would significantly enhance the paper's depth of analysis.
4. The paper lacks clear explanation regarding the evaluation settings, such as the specific split used during evaluation. While I did find these details eventually in the supplementary material, I would recommend the authors incorporate such essential information directly within the main body of the paper. This would ensure a smoother reading experience and easier access to important technical specifications.
[1] BLIP: Bootstrapping Language-Image Pre-training for Unified Vision-Language Understanding and Generation
[2] mPLUG: Effective and Efficient Vision-Language Learning by Cross-modal Skip-connections
[3] OFA: Unifying Architectures, Tasks, and Modalities Through a Simple Sequence-to-Sequence Learning Framework
[4] BLIP-2: Bootstrapping Language-Image Pre-training with Frozen Image Encoders and Large Language Models
[5] MiniGPT-4: Enhancing Vision-Language Understanding with Advanced Large Language Models
Technical Quality: 2 fair
Clarity: 4 excellent
Questions for Authors: 1. With the existence of general multimodal models, why do we need OpenVik?
Please answer my questions mentioned in the **Weaknesses** .
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 4 excellent
Contribution: 2 fair
Limitations: Good
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ***What unique advantages OpenVik bring compared with existing multimodal models? With the existence of general multimodal models, why do we need OpenVik***
**A**: Thank you for raising the question regarding the need for OpenVik in the context of existing multimodal models. The specific need for OpenVik lies in addressing key challenges in visual relational knowledge that current models still grapple with:
- **Current Models Exhibit Deficiencies in Compositional Understanding**: Despite the success of large vision and language models (VLMs) in many downstream applications, it is unclear how well they encode compositional information. Recent studies have demonstrated that state-of-the-art VLMs have poor relational understanding and often rely on object-centric shortcuts, leading to models that behave like a bag of words. Such models can blunder when linking objects to their attributes and demonstrate a severe lack of semantic role sensitivity [1, 2]. OpenVik's focus on relational knowledge extraction encourages a deep understanding of semantic structure, and visual prompting enables semantic grounding across modalities.
- **Relational Knowledge Helps in Complex Planning and Reasoning**: Tasks requiring intricate comprehension, such as planning and reasoning, benefit from relational knowledge [3, 4, 5, 6]. The relational knowledge from OpenVik can enhance compositional generalization in visual reasoning, thereby contributing to the system's trustworthiness and robustness.
- **Openness and Diversity**: OpenVik's specialized approach to open relational detection, format-free knowledge generation, and diversity-driven data enhancement provides unique advantages for discovering nuanced, detailed verbs that are rarely seen in the training set.
[1] Yuksekgonul, Mert, et al. "When and why vision-language models behave like bag-of-words models, and what to do about it?." ICLR, 2023.\
[2] Hendricks, Lisa Anne, and Aida Nematzadeh. "Probing image-language transformers for verb understanding." ACL, 2021.\
[3] Li, Bo, et al. "Evaluating ChatGPT's Information Extraction Capabilities: An Assessment of Performance, Explainability, Calibration, and Faithfulness." arXiv preprint arXiv:2304.11633 (2023).\
[4] Wang, Renhao, et al. "Programmatically Grounded, Compositionally Generalizable Robotic Manipulation." ICLR, 2023.\
[5] Kurenkov, Andrey, et al. "Modeling Dynamic Environments with Scene Graph Memory." International Conference on Machine Learning. PMLR, 2023.\
[6] Hsu, Joy, Jiayuan Mao, and Jiajun Wu. "DisCo: Improving Compositional Generalization in Visual Reasoning through Distribution Coverage." TMLR, 2022.
***It would be more persuasive if contemporary models were included in the comparison***
**A**: We thank the reviewer for the insightful comments. Please refer to our added results under ***Q1*** in the global response.
***Crucial distinctions, such as those between the Open Relational Region Detector and the standard FasterRCNN, are not sufficiently elucidated***
**A**: Please note that a direct comparison between the two may not be straightforward due to their differing objectives. The standard FasterRCNN is designed for object-level detection and classification, wherein the objects are restricted to a predefined set. In contrast, our Open Relational Region Detector is trained to automatically detect relational regions potentially containing relation-oriented knowledge, which is a more complex and nuanced task.
In response to the comments, we have added an ablation study to illustrate the effects of loading a pre-trained detector backbone versus training the detector from scratch without pre-training. Please refer to our results under ***Q2*** in the global response.
***Incorporate evaluation settings directly within the main body of the paper***
**A**: Thanks for the suggestions! Due to the page limit and the experiment quantity of knowledge generation as well as three downstream tasks, we only included the main module implementation details in the main body, with the specifications for downstream evaluations in Appendix C and D. We appreciate your suggestions on the significance of such details and will ensure that the essential evaluation specifications are included in the main body of the final version.
---
Rebuttal Comment 1.1:
Title: Reply to Author's Rebuttal
Comment: Thanks for the detailed explanation to my questions. I've carefully read your reply and other reviewers' comments. The ablation study in global response do convince me the effectiveness of the proposed model. Therefore, I've changed my rating from 5 to 7.
---
Reply to Comment 1.1.1:
Comment: We thank the reviewer for the reply and we are happy to note that we have addressed your concerns in the rebuttal. We will make sure to properly incorporate the additional results and discussions in the rebuttal into our revised paper. | Summary: This paper that introduces a new paradigm of open visual knowledge extraction called OpenVik. This method generates format-free knowledge by prompting a large multimodality model with detected regions of interest. The proposed framework consists of an open relational region detector and a format-free visual knowledge generator. The paper highlights the limitations of existing approaches to visual knowledge extraction and demonstrates the correctness and uniqueness of the extracted open visual knowledge by OpenVik. The extracted knowledge is integrated across various visual reasoning applications, showing consistent improvements and indicating the real-world applicability of OpenVik.
Strengths: 1. Novel Approach: The paper introduces a new paradigm, OpenVik, for open visual knowledge extraction. This system generates format-free knowledge by prompting a large multimodality model with detected regions of interest, which is a novel approach in the field.
2. Comprehensive Evaluation: The paper provides a thorough evaluation of the generated knowledge, using traditional generative metrics and in-depth knowledge quality assessment. This comprehensive evaluation helps validate the effectiveness of the proposed method.
3. Comparison with Existing Knowledge Sources: The paper compares the extracted visual knowledge with non-parametric knowledge in existing knowledge graphs and parametric knowledge from large language models. This comparison provides a clear understanding of the unique value proposition of OpenVik.
4. Real-World Application: The extracted knowledge is integrated across various visual reasoning applications, showing consistent improvements. This indicates the real-world applicability of OpenVik, making it a practical solution for visual knowledge extraction.
Weaknesses: 1. Limited Dataset: The training data for the model is built based on Visual Genome and its relation-enhanced version Dense Relational Captioning. The performance of the model on more diverse datasets remains to be studied, which could limit its generalizability.
2. Implementation Complexity: The model involves complex implementation details, including the use of a ResNet50-FPN backbone for the open relational region detector and fine-tuning of the visual knowledge extractor. This complexity could make the model difficult to implement and adapt for other researchers or practitioners.
3. Limited comparisons and lack of ablation studies: The authors only compare to a few baselines, but there are many knowledge-enhanced models which need to be compared in the application section.
Other detailed questions are listed in Questions section.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. There are many other notions in Figure 1, such as L_k, can the authors explain all of them or list all the equations?
2. What is binary mask (line 113) in the format-free visual knowledge generator?
3. What is the training loss of the method? It is not clear to me, since there are many notions in Figure 1 but not appearing in the paper.
4. Can this method be scale to larger training dataset, or other backbones?
5. In the evaluation of in-depth knowledge quality, the authors mentioned they randomly selected 100 images. How about the compared method, did they use the same images or same number of images? How many raters or annotators in the studies?
6. In the application section, are the results of OpenVik zero-shot or fine-tuning? The implement details are not clear in the applications, and need to compare with more other methods.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: >***The performance of the model on more diverse datasets remains to be studied. Can this method be scaled to larger training dataset, or other backbones***
**A**: In this work, we chose Visual Genome and its relation-enhanced dataset as the benchmark because they are rich in relational region descriptions, making them the most suitable public data available for training a visual relational knowledge extractor, as far as we know in the existing literature.
It's essential to clarify, however, that our proposed methods are not constrained to these specific datasets or architectures. They have been designed to be model-agnostic and can be flexibly adapted to other backbones or tested on more diverse datasets. The challenge of gathering such datasets that are capable of supporting deep-semantic understanding and cross-modality grounding is complex. We recognize this as an important avenue for future exploration and are eager to expand our research based on the current work.
>***Implementation complexity***
**A**: Our model is composed of two main components: the region detector and the knowledge generator. Both leverage established backbones. We've tailored these designs specifically for open relational tasks, ensuring we avoid adding undue model complexity. To facilitate ease of implementation and to encourage further research in this domain, we intend to release our comprehensive implementation code along with trained model checkpoints.
>***Need to compare with more other methods for the applications and implementation details***
**A**: In our work, three application tasks were utilized to evaluate the efficacy of OpenVik's generated knowledge by augmenting the input with generated knowledge for the challenging zero-shot settings. It's important to recognize that this augmentation can also be adapted to different backbones or in varied settings, while the optimization for incorporating knowledge across diverse applications is beyond the scope of this work.
To further showcase generalizability, we have included additional results (Tables 13–15 in the attached PDF) using a contemporary backbone for each application. The results confirm that this new backbone model benefits from OpenVik's contexts, similar to the zero-shot baseline. This finding further substantiates the usefulness of our approach to extracting visual relational knowledge.
We acknowledge and appreciate your valuable feedback, and we agree that these expanded comparisons and clarifications will strengthen the paper.
For those seeking more specific details on the application implementations, please refer to the following:
- Retrieval: lines 276-279 (zero-shot, ZS) and 300-307 (knowledge augmented, knowledge+)
- GSR: lines 296-299 (ZS) and lines 300-307 (knowledge+)
- VCR: lines 323-325 (ZS) and 326-329 (knowledge+)
>***Notations and training loss***
**A**: Thanks for pointing out the oversight that needs further clarification. We have listed all notations in Tables 10 and 11 in the attached PDF and will ensure to include these crucial details in the final version.
Specifically, for the relational region detector, $L_{RD}$ is the regional regression loss from the detector, and $L_{K}$ is the supervision from the ground truth relational knowledge. The training objective $L_l$ of the relational region detector is formulated as $L_v = L_{RD} + L_{K}$.
- **Region Regression**: This part is guided by our created regional box labels, denoted as $U_j$. More precisely, the foreground of these relation-centric region labels is created by taking the union of the object-level bounding boxes of the entities, such as *boat*, *water*, contained in a ground truth region knowledge description $V_j$. This forms the smooth L1 loss $L_{RD}$ [1] for region regression.
- **Knowledge Supervision**: To assist with the refinement of the bounding box, we replaced the object-centric label classification in traditional object detectors with knowledge supervision. A pre-trained generator is finetuned to create the regional description grounded to the given region. This is supervised by the cross entropy loss $L_K$ with region description $T_j$ [2].
[1] Ross Girshick. Fast r-cnn. ICCV 2015.\
[2] Li et al. Blip: Bootstrapping language-image pre-training for unified vision-language understanding and generation. ICML, 2022.
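As a rough illustration of the two-term objective described above, here is a minimal numpy sketch, assuming the smooth L1 and token-level cross-entropy forms cited in [1] and [2]; this is not the authors' code, and all function and argument names are hypothetical:

```python
import numpy as np

def smooth_l1(pred, target, beta=1.0):
    """Smooth L1 (Fast R-CNN style) regression loss for the region boxes."""
    d = np.abs(pred - target)
    return np.where(d < beta, 0.5 * d ** 2 / beta, d - 0.5 * beta).mean()

def cross_entropy(logits, targets):
    """Token-level cross-entropy for the generated region description."""
    z = logits - logits.max(axis=-1, keepdims=True)
    log_p = z - np.log(np.exp(z).sum(axis=-1, keepdims=True))
    return -log_p[np.arange(len(targets)), targets].mean()

def detector_loss(pred_boxes, region_labels, token_logits, gt_tokens):
    """L_v = L_RD + L_K: box regression against the union-of-objects region
    labels U_j plus knowledge supervision from the description T_j."""
    l_rd = smooth_l1(pred_boxes, region_labels)
    l_k = cross_entropy(token_logits.reshape(-1, token_logits.shape[-1]),
                        gt_tokens.reshape(-1))
    return l_rd + l_k
```

The two terms are simply summed, mirroring the stated objective in which knowledge supervision replaces the usual object-class classification head.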
>***What is binary mask in the generator***
**A**: Each binary mask represents a detected region from the open region detector, based on which the model would generate region-specific knowledge.
>***Details for in-depth knowledge quality evaluation***
**A**: All the methods compared were based on the same set of 100 randomly selected images to ensure a fair comparison. As detailed in Appendix D lines 561–562, we utilized three different raters for the study. The calculated average pairwise Cohen's κ value is 0.76, demonstrating good agreement.
---
Rebuttal Comment 1.1:
Comment: Dear Authors,
Thanks for your detailed responses. But I think all the notations should be clearer and easier to understand, i.e., what do you mean by "The training objective $L_l$ of the relational region detector is formulated as $L_v = L_{RD} + L_K$" — then how is $L_l$ calculated? Also, I think the improvements using BLIP are sort of minor, so does it matter which backbone is chosen?
However, I think the responses answer some of my questions, and I am happy to raise my score to borderline accept.
---
Reply to Comment 1.1.1:
Comment: We thank the reviewer for the constructive feedback. We are pleased to note that we could address some of your concerns in the rebuttal.
We apologize for any confusion regarding the notation. To clarify for the specific one, there was a typo and the training objective $L_v$ of the relational region detector should be expressed as $L_v=L_{RD}+L_K$. As for the calculation of $L_l$ , please refer to equation 2 in the paper. We will check our notations systematically in the revision and make necessary rearrangements.
On the topic of backbones, our proposed strategies, notably relational-oriented prompting and diversity-driven data enhancement, are designed with versatility in mind. They can be seamlessly integrated with various vision-language backbones, including ALIGN [1], VL-BART (T5) [2], and SimVLM [3], among others. This adaptability underscores the robustness and broad applicability of our approach.
Once again, thank you for your time and insights. We will make sure to properly incorporate the additional discussions in the rebuttal into our revised paper.
[1] Jia, Chao, et al. "Scaling up visual and vision-language representation learning with noisy text supervision." ICML, 2021.
[2] Cho, Jaemin, et al. "Unifying vision-and-language tasks via text generation." ICML, 2021.
[3] Wang, Zirui, et al. "Simvlm: Simple visual language model pretraining with weak supervision." ICLR, 2022. | Rebuttal 1:
Rebuttal: >***Q1. Adding contemporary regional captioning baselines***
**A**: We appreciate the helpful suggestions on adding region captioning baselines. Note that although the proposed task in our paper has some similarities to region captioning, we would like to highlight the crucial difference in OpenVik: **it is designed to automatically detect regions that can be grounded in diverse and fresh relational knowledge, in contrast with traditional methods where the region detectors primarily focus on object-level areas, or the generators are learned as a bag of words over a fixed class of objects**.
Following the suggestion from reviewers PUUs and GNm1, we expanded our comparison baselines to include more region captioning methods, including Sub-GC [1], BLIP [2], and BLIP2 [3]. The results are shown below (A full comparison with regional captioning methods is included in Table 12 of the attached PDF).
| Method | BLEU | ROUGE-L | METEOR | Validity | Conformity | Freshness | Diversity |
| ---------- | ----- | ------- | ------ | -------- | ---------- | --------- | -------- |
| Sub-GC | 0.272 | 0.263 | 0.221 | 0.892 | 0.871 | 0.795 | 0.547 |
| BLIP | 0.264 | 0.266 | 0.252 | 0.886 | 0.855 | 0.760 | 0.531 |
| BLIP2 | 0.275 | 0.285 | 0.257 | 0.892 | 0.871 | 0.766 | 0.535 |
| OpenVik | 0.280 | 0.283 | 0.250 | 0.907 | 0.883 | 0.809 | 0.619 |
The results show that OpenVik achieves accuracy comparable to the recently proposed SOTA approaches while being superior on the diversity and freshness metrics, which indicates a deeper understanding and richer reasoning ability. The comparison also illustrates that OpenVik's specialized designs, including contrastive decoding and knowledge diversity regularizers, confer clear advantages in diversity and freshness.
[1] Zhong, Yiwu, et al. "Comprehensive image captioning via scene graph decomposition." ECCV, 2020. \
[2] Li, Junnan, et al. "Blip: Bootstrapping language-image pre-training for unified vision-language understanding and generation." International Conference on Machine Learning. PMLR, 2022. \
[3] Li, Junnan, et al. "Blip-2: Bootstrapping language-image pre-training with frozen image encoders and large language models." arXiv preprint arXiv:2301.12597 (2023).
>***Q2. Ablation on the pre-training for the open relational region detector***
**A**: To address this concern, we have conducted an additional ablation study, contrasting the outcomes when loading a pre-trained detector backbone with training the detector from scratch (The full ablation can be referred to in Figure 13 of the attached PDF):
| Variant | BLEU | ROUGE-L | METEOR | Validity | Conformity | Freshness | Diversity |
| -------------------- | ----- | ------- | ------ | -------- | ---------- | --------- | --------- |
| w/o PreDet | 0.201 | 0.275 | 0.230 | 0.812 | 0.833 | 0.701 | 0.502 |
| Full Model | 0.280 | 0.283 | 0.250 | 0.907 | 0.883 | 0.809 | 0.619 |
The ablation results show that omitting the pre-training step of the FasterRCNN model tends to result in the detection of more overlapping regions. This, in turn, causes a noticeable decrease in both knowledge diversity and freshness in the detected regions, which indicates the importance of loading the pre-trained model for region detection.
Pdf: /pdf/cca6197d15f804d0acd8dbfaa6b755f98e6f6330.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: This paper proposes a new paradigm called open visual knowledge extraction and designs a framework OpenVik to generate format-free knowledge instead of pre-defined format knowledge. The authors also present two data enhancement technologies to ensure the diversity of knowledge. Moreover, the paradigm could also be integrated with other applications to jointly boost the performance.
Strengths: 1. The paper is well-written and easy to understand.
2. The proposed open relational region detector is interesting.
3. The evaluation for in-depth knowledge quality is significant.
Weaknesses: 1. The originality of the work is incremental. Indeed, both the open relational region detector and the format-free visual knowledge generator are minor modifications of existing models.
2. The experiments presented in this paper are insufficient. The comparison baselines should be more explored because the task of this paper is quite similar to region captioning task, such as [1], [2].
3. The word ‘open’ in the title is confusing. The paper does not tackle open-world knowledge; it is still based on a closed set of objects.
4. The implementation of the open relational region detector lacks detailed information, such as how the bounding box is updated. The label set raises another question.
References
[1] Zhong, Yiwu, et al. "Comprehensive image captioning via scene graph decomposition." Computer Vision–ECCV 2020: 16th European Conference, Glasgow, UK, August 23–28, 2020, Proceedings, Part XIV 16. Springer International Publishing, 2020.
[2] Ghosh, Shalini, et al. "Generating natural language explanations for visual question answering using scene graphs and visual attention." arXiv preprint arXiv:1902.05715 (2019).
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: 1. If the supervision is the generated description, how can it guarantee the corresponding label for a region that contains the same objects and relations? Is it a few-shot task?
2. More details of the generator should be provided. What is the input of the generator: a single region or the whole image?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 3 good
Contribution: 2 fair
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: >***The originality of the work is incremental and made minor modifications to existing models***
**A**: We would like to clarify several distinct differences between OpenVik and existing models:
- **Open relational region detector**: Existing detectors often focus on locating objects, while OpenVik is trained to directly detect regions that capture **interactions** and **abstract semantic structures**, such as vivid verbs like “attached to” and nuanced details like “full of”. Some alternative region detectors often need additional visual controls, such as mouse clicks [1, 2] or language controls [3] on a combination of predefined sets of textual properties. OpenVik saves this additional input with its automatic visual grounding ability, which thus improves **knowledge diversity and freshness** significantly.
- **Knowledge generator**: One big difference is that OpenVik is the reasoning-driven generation. Existing regional captioners or knowledge generators often rely on object-level annotation, where the text decoder generates descriptions based on the localized object set [4]. **This leads to the model working like a bag-of-word and a lack of deep semantic understanding.** OpenVik provides **better knowledge grounding** by conditioning the generator on the detected relational region. It includes the ability to **automatically discover the types of interest** that are not only salient but also benefit downstream relational reasoning, as proved in various downstream tasks in Section 5.
- **Language diversity**: Training such a new paradigm for open relational visual knowledge extraction is not trivial. The lack of diverse training data and distribution bias present significant challenges. As shown in Section 4.3 and Figure 3, the **diversity-driven data enhancement** strategies put forth in our work can effectively optimize knowledge richness. They are also generalizable to other models or backbones.
In light of these, our work goes beyond minor modifications of existing models. We have sought to tackle the challenging problem of open visual knowledge acquisition and deep semantic grounding. The proposed generic techniques can also be applied to other models and datasets.
[1] Pont-Tuset, Jordi, et al. "Connecting vision and language with localized narratives." ECCV, 2020.\
[2] Yan, Kun, et al. "Control image captioning spatially and temporally." ACL. 2021.\
[3] Deng, Chaorui, et al. "Length-controllable image captioning." ECCV, 2020.\
[4] Wu, Jialian, et al. "Grit: A generative region-to-text transformer for object understanding." arXiv preprint arXiv:2212.00280 (2022).
>***The comparison baselines should be more explored since the task is quite similar to region captioning***
**A**: We thank the reviewer for the insightful comments. Please refer to our added results under ***Q1*** in the global response.
>***The paper could not tackle open-world knowledge***
**A**: In OpenVik, both objects and relations go beyond a pre-defined set. This is achieved through the design of two core modules:
- Relational detector: The relational detector performs region-level detection instead of object level detection. This is done within a single, holistic detection box supervised by knowledge beneficial for reasoning, allowing for deep semantic and combinational understanding that is not constrained by specific object categories.
- Generative knowledge process: The subsequent knowledge generation is designed to be generative and format-free, not restricted to mere classification or generation around a closed set of objects. This flexibility allows it to capture information in a more open-ended manner, reflecting the true diversity and complexity of real-world visual information.
Additionally, the diversity strategies proposed help to mitigate distribution bias in training data, encouraging the generation of novel knowledge, including in zero- and few-shot scenarios.
>***The implementation of open relational region detector***
**A**: Thanks for pointing out the areas that need further clarification. The training objective of our relational region detector consists of two components: region regression and knowledge supervision, as detailed in lines 106-110.
- **Region Regression**: This part is guided by our newly generated region labels, denoted as $U_j$. As detailed in lines 108-109, the foreground of these relation-centric region labels is created by taking the union of the object-level bounding boxes of the entities, such as *boat*, *water*, contained in a ground truth region knowledge description $T_j$. This forms the region regression loss $L_{RD}$.
- **Knowledge Supervision**: To assist with the refinement of the bounding box, we have replaced the object-centric label classification found in traditional object detectors with knowledge supervision (line 110). A pre-trained generator is fine-tuned to create the regional description for the given region. This is supervised by the ground truth description $T_j$ with loss term $L_K$.
The combined training objective for the relational region detector is $L_v = L_{RD} + L_K$. We recognize that this notation (in Figure 1) was not clearly explained. We are grateful to the reviewer for pointing it out, and we will make this clear in the final version of the paper.
>***What is the input of generator? Is the single region or the whole image?***
**A**: Thank you for the inquiry. OpenVik can handle multiple detected relational regions obtained from the detectors for a given image. As detailed in lines 118-120, each of these regions is processed individually and serves as an input to the generator. Along with the specific region, the generator also receives the ViT representation of the entire image. The detected region is utilized as a binary attention mask, which helps to filter out the background and concentrate solely on one relational foreground at a time.
---
Rebuttal Comment 1.1:
Comment: Thanks to the authors for the responses.
Several concerns have been addressed. However, I still think the paper lacks novelty. The design of the region detector does not reflect remarkable innovation compared with alternative region detectors. Moreover, the region detector restricts the potential of this task: the region annotation is created from the union of object regions defined in prior datasets, which cannot enlarge the detection set. I suppose it still cannot solve the open visual knowledge issue. Therefore, I maintain my original rating.
---
Reply to Comment 1.1.1:
Title: Clarifying the Role of Region Detector in OpenVik and the Major Novelty of This Work
Comment: Thank you for taking the time to provide further feedback.
First, regarding the novelty of region detection, we would like to clarify that the region detector is not claimed as a major novelty of this work. Instead, we adapt an appropriate existing method to provide visual grounding and serve our broader objective of open visual knowledge extraction. The major novelty of this work lies in the relational conditioning framework and the subsequent design of generative models for open visual knowledge extraction, which can generate relational knowledge not limited to a specific relation vocabulary, as well as the additional designs that further enhance the openness and richness of the generated knowledge.
Addressing the second concern about the limitation of region detection, the term "open knowledge" in this work means that the knowledge is not confined to entities or relations from a given dataset, which is achieved through a generative model that can continuously accumulate new relational knowledge from different resources. The generative model is first pre-trained on large image captioning datasets. To enhance the diversity and novelty of the output knowledge, we leverage external knowledge sources to supplement relation recognition and boost entity perception. As elaborated in Section 3.3, the vocabulary for both entities and relations is enriched using this external knowledge coupled with a commonsense language model. Through this approach, we tackle the open visual knowledge challenge highlighted by the reviewer. Our empirical evaluations, detailed in lines 251-257 and presented in Table 2, highlight OpenVik's capability to extract visual knowledge of significantly higher diversity compared to benchmarks like Visual Genome and Relational Caps. Region-oriented openness of relational knowledge has not been a focus of this work; it may or may not be a promising direction, and it can be studied in future work.
In conclusion, the reviewer’s concerns are mainly centered on OpenVik's region detector. We believe that while it is a fundamental component of visual knowledge extraction, it is relatively separate from our major novelty in this work and should not overshadow the primary innovations: our reasoning-driven designs in the knowledge generator and the data enhancement mechanisms. Together, they pioneer a fresh approach to knowledge extraction toward both openness and diversity. We sincerely hope our explanation provides a clearer understanding of the significance and objectives of our work. | null | null | null | null | null | null
Distilling Out-of-Distribution Robustness from Vision-Language Foundation Models | Accept (poster) | Summary: The authors propose a framework that improves the image classification model's robustness by distilling CLIP models and augmenting adversarial learning with pre-trained generative models. The authors follow classical adversarial learning to generate perturbed examples and then input the examples to VQ-GAN. Lastly, they distill a CLIP model into a smaller network with both normal and augmented adversarial examples. While I have some concerns about the results, extensive experiments verify the effectiveness of the proposed method in many settings.
Strengths: 1. The extensive experiments cover various settings, such as different architectures for teacher and student models and variations of ImageNet.
2. The authors provide some theory explaining that a more diverse distribution could be helpful for distillation.
Weaknesses: 1. The research problem is unclear. In L37, the authors claim to "train small robust models with a large-scale model as a teacher, but without access to the teacher's training data." However, they focus more on out-of-distribution robustness in the later parts and the experiments.
2. The proposed method needs to be better justified. In L118, the authors suddenly introduce generative models to augment adversarial examples and claim that it helps generalizability without any reasoning. If possible, the authors should provide more motivation and insights into it and conduct some experiments.
3. Many notations are not defined. For example, the definition of $\theta(x, x')$ is missing. How exactly does $Q(x')$ work?
4. The novelty is limited. The main contribution seems solely to leverage VQ-GAN to enhance distillation. The reason for choosing VQ-GAN is also missing.
5. The research scope is limited. The authors claim in L45 that distilling large-scale models into smaller models improves adversarial robustness. However, the experiments only consider CLIP models.
Technical Quality: 2 fair
Clarity: 1 poor
Questions for Authors: 1. The performance could be better, even though it outperforms prior work. In Table 1, neither the baselines nor the proposed method achieves the CLIP baseline performance. Maybe one should look into cross-modality instead of augmenting query datasets for distillation.
2. What does it mean by *DAD (no distill)*? I recommend the authors refer the readers to the corresponding loss terms.
3. The proposed method is similar to leveraging more data to perform distillation (Eq. 8). The author may like to explore using different kinds of data (e.g., ImageNet + CIFAR-10 or ImageNet + ImageNet with augmentation), which could even outline the contribution of the proposed method.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 1 poor
Contribution: 2 fair
Limitations: 1. The authors only consider distilling CLIP models.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the insightful comments. We will address the concerns about the novelty and justification for our method.
\
\
*The research problem is unclear...they focus more on out-of-distribution robustness in the later parts and the experiments.*
Thank you for pointing this out. To clarify, the goal of our DAD is to improve out-of-distribution robustness on smaller models and datasets by distilling from a large-scale teacher. We point out the distinction between different forms of robustness on L67-69, but agree with the reviewer that this should be clear from the beginning and will revise the statement in the updated paper.
\
\
*The proposed method needs to be better justified...If possible, the authors should provide more motivation and insights into it and conduct some experiments.*
The choice of data augmentation is one of the key components of our proposed method. Since we focus on out-of-distribution robustness, we adopt a generative model to generate data augmentations adversarially to ensure the resulting transformation is semantically meaningful. The VQGAN can apply a semantically meaningful transformation on the image to better capture a useful distribution shift than standard pixel-level perturbations. The effects of not applying a generative model and using common data augmentations can be observed in Table 7 in the Appendix. Here we only used Mixup and RandAugment and demonstrate the results of distilling from CLIP without using the generative model / adversarial examples. The results are much worse, with an average -13.2% decrease in performance on natural distribution shifts.
\
\
*Many notations are not defined. For example, the definition of theta(x,x’) is missing. What exactly does Q(x’) work?*
$x$ refers to the base input (L101) and $x'$ to the augmented input (L104). $\theta(x, x')$ denotes passing the image or the augmented image through the student model. We will add this notation to the revised paper. $Q(x')$ denotes the process of discretizing an input image, which we describe in more detail in L145-152. To summarize, after we apply a normal perturbation $x \to x'$, the VQGAN encodes the input $x'$ into a latent representation which is then decoded, applying a semantic transformation.
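To make the discretization in $Q(x')$ concrete, here is a minimal numpy sketch of the nearest-codebook lookup at the core of a VQGAN quantizer; this is an illustrative sketch, not the authors' implementation, and the function name is hypothetical:

```python
import numpy as np

def vq_discretize(latents, codebook):
    """Nearest-codebook-entry lookup, the discretization step of Q(x').

    latents:  (N, D) encoder outputs for the perturbed image x'.
    codebook: (K, D) learned codebook vectors.
    Returns the quantized latents (which the decoder maps back to an image,
    applying a semantic rather than pixel-level change) and their indices.
    """
    # Squared distance from every latent vector to every codebook entry.
    d = ((latents[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)
    idx = d.argmin(axis=1)
    return codebook[idx], idx
```

Decoding the snapped latents, rather than the raw ones, is what restricts the output to the generator's learned image manifold.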
\
\
*The novelty is limited. The main contribution seems solely to leverage VQ-GAN to enhance distillation. The reason for choosing VQ-GAN is also missing.*
We kindly emphasize our key contributions below:
1. *Setting.* We are the first to introduce a robust teacher and use knowledge distillation to further improve robustness. We focus on OOD-robustness due to its practical usefulness and the specialization of large-scale models on natural distribution shifts. These models exhibit strong robustness due to their diverse training data, and we are the first to leverage these representations as an additional form of regularization.
2. *Objective.* We introduce a novel knowledge distillation objective for the OOD-robustness setting where we add a second KL-divergence term between the student and teacher predictions on the augmented image. Previous works in defensive distillation like ARD use the teacher predictions on the normal image for this second term, but this is not adaptive to semantic transformations.
3. *Data augmentation.* We indeed use a VQGAN for the data augmentation. However, our key novelty here is using the teacher’s gradients to generate adversarial examples. We find that this results in more diverse adversarial examples closer to the teacher’s representations, and our theoretical framework is based on this idea. We choose VQGAN in particular following the prior SOTA DAT, which finds this to lead to the best result. We also try using the newer Stable Diffusion model in the response to Reviewer qyW3, and find that it results in worse performance.
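The objective described in point 2 can be sketched as follows; this is a schematic numpy illustration under our stated formulation (cross-entropy on labels plus KL terms against the teacher on both the clean and the augmented image), with hypothetical names and weights, not the exact implementation:

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def kl_div(p, q, eps=1e-12):
    """KL(p || q), averaged over the batch."""
    return (p * (np.log(p + eps) - np.log(q + eps))).sum(axis=-1).mean()

def distill_loss(student_x, student_xa, teacher_x, teacher_xa, labels,
                 alpha=1.0, beta=1.0):
    """Cross-entropy on labels plus two KL terms matching the teacher on the
    clean image x (logits *_x) and on the augmented image x' (logits *_xa)."""
    p_s = softmax(student_x)
    ce = -np.log(p_s[np.arange(len(labels)), labels] + 1e-12).mean()
    return (ce
            + alpha * kl_div(softmax(teacher_x), p_s)
            + beta * kl_div(softmax(teacher_xa), softmax(student_xa)))
```

The second KL term is what distinguishes this from defensive-distillation objectives such as ARD, which compare against the teacher's predictions on the *unaugmented* image.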
\
*The research scope is limited. The authors claim in L45 that distilling large-scale models into smaller models improves adversarial robustness. However, the experiments only consider CLIP models.*
We focus our analysis on CLIP because of its accessibility and performance. At the time of submission, there were no open-source large-scale vision models better than CLIP. However, we note that DAD is agnostic to the choice of teacher. In fact, we provide results in Table 4 using different teacher models. Generally, we find that it is better to use CLIP. We hope to use newer foundation models as they become available.
\
\
*The performance could be better, even though it outperforms prior work...Maybe one should look into cross-modality instead of augmenting query datasets for distillation.*
We propose to use CLIP as an additional source of regularization in lower compute regimes, rather than improving upon CLIP. Data augmentation is easy to apply and can be done across a variety of settings, including in cross-modal training. DAD also improves upon prior data augmentations, especially on natural distribution shifts.
\
\
*What does it mean by DAD (no distill)?...*
This means training with only the L1 cross-entropy loss but with DAD samples, which corresponds to standard training using previously computed DAD samples for augmentation. We will make this clearer in the revised paper.
\
\
*The proposed method is similar to leveraging more data to perform distillation (Eq. 8). The author may like to explore using different kinds of data...*
Indeed we do try simpler data augmentations in Table 7 in the Appendix. Here we use Mixup and RandAugment and find the results are worse. The goal of using the VQGAN and teacher-gradient-based adversarial examples is to generate semantically meaningful and diverse perturbations. We find that these simpler forms of data do not represent difficult real-world transformations as well as DAD. Combining different datasets is certainly an interesting direction to consider.
\
\
We hope our response addressed the reviewer’s questions and concerns. We are happy to answer any further questions.
---
Rebuttal Comment 1.1:
Title: Response to the Authors
Comment: I would like to thank the authors for the detailed response.
After reading the response and carefully re-reading the paper, my concern about the research problem in this paper is partially addressed. However, the presentation of the paper remains a main issue. Many descriptions in this paper are unclear or even confusing. I would like to clarify my understanding below before adjusting my score.
1. In L21-26, it looks like the authors plan to work on improving the in-distribution accuracy under adversarial learning. But, it turns out to investigate data augmentation with the *zero-shot generalization* of foundation models (L33).
2. L36-38, again, is confusing. As mentioned in my previous comment, it gives the impression that the goal is solely to distill large models into smaller ones.
3. Is L40-L46 a finding from the authors or cited from other works? Does this only happen to CLIP?
4. Following (3), Table 7 in the appendix is important evidence to motivate the proposed method, where normal augmented data are unsuitable for distilling CLIP models. It is unclear why the authors leave them in the appendix.
5. The work's main contribution is to leverage discretizers, i.e., VQ-GAN, and the idea builds heavily on the prior work DAT. However, the relations, detailed comparisons, and motivations are entirely missing from this work, making it hard for readers to judge the contribution. The operation of the VQ-GAN is also missing.
6. The math presentation needs to be revised. For instance, where does $U$ come from in L101? $Q$ sometimes takes one input argument and sometimes two.
I might change the score in the end, but I still find the writing not ready for publication, despite its effectiveness. I strongly recommend rephrasing the introduction if the paper gets accepted.
---
Reply to Comment 1.1.1:
Title: Response to the Reviewer
Comment: We thank the reviewer for the response. We are glad that our rebuttal has addressed the reviewer's concerns especially on the research problem. It is also encouraging to hear that the reviewer is open to adjusting the score. The remaining concerns from the reviewer are on the presentation, and we thank the reviewer for clarifying them. Below we address all these points in further detail, and we will revise the paper accordingly.
\
*In L21-26, it looks like the authors plan to work on improving the in-distribution accuracy under adversarial learning. But, it turns out to investigate data augmentation with the zero-shot generalization of foundation models (L33).*
In this section, our intent was to establish the motivation for DAD. We kindly refer the reviewer to L36, where our motivation is explained as combining the strengths of regularization and foundation models. Before L36, we point out the weaknesses of either approach. In L21-26, we argue that prior regularization techniques like adversarial training reduce in-distribution accuracy (L25) or do not transfer well to out-of-distribution robustness (L26). Therefore, our focus is on distilling foundation models through data augmentation. We realize that the current presentation in paragraph L21-26 could be confusing. We will revise this paragraph to make it clearer by 1) highlighting the deficiencies of current data augmentation methods in particular, and 2) moving the discussion on adversarial robustness to the Related Work section.
\
*L36-38, again, is confusing. As mentioned in my previous comment, it gives the impression that the goal is solely to distill large models into smaller ones.*
We see the reviewer’s concern. For clarity, it may be helpful to separate the methodology from the goal. Broadly, our goal is indeed to improve out-of-distribution robustness, as the reviewer previously noted. But our novelty and methodology are based on distilling from large models. We will make this clearer by revising L36-38 in the following way:
- In this paper, we aim to connect these two lines of work. Our goal is to improve the out-of-distribution robustness of small models without large-scale training. To this end, we introduce a foundation model as a teacher to improve the diversity of data augmentation and directly distill robust representations.
\
*Is L40-L46 a finding from the authors or cited from other works? Does this only happen to CLIP?*
The finding is from us. The ability to transfer robustness using vanilla knowledge distillation is our finding and is unexplored in prior work.
The finding does not only happen to CLIP. We observe in Table 8 in the Appendix that this finding generalizes across different teacher architectures.
\
*Following (3), Table 7 in the appendix is important evidence to motivate the proposed method, where normal augmented data are unsuitable for distilling CLIP models. It is unclear why the authors leave them in the appendix.*
We left this result in the Appendix due to lack of space, but we agree that it is important. So we will add one of the main results of Table 7 (CLIP can distill robustness) as a figure in the introduction, and we will also place it in the experimental section as an ablation for using common data augmentations to demonstrate the need for the generative model.
---
Reply to Comment 1.1.2:
Title: Response to the Reviewer cont.
Comment: *continued from the previous response*
\
*The work's main contribution is to leverage discretizers, i.e., VQ-GAN, and the idea is highly built upon the prior work DAT.*
We would like to respectfully clarify that although we do draw upon the idea of using VQ-GAN from DAT, we additionally contribute a new setting, a knowledge distillation objective for out-of-distribution robustness, and data augmentations generated using teacher gradients. We refer the reviewer to the initial response to concerns about novelty for more detail.
*However, relations, detailed comparisons, or motivations are totally missing in this work, making it hard for readers to judge the contribution.*
- We already provided relations to prior uses of generative models for regularization in L79-83.
- The motivation for using a discretizer in our setting can be found in L117-121.
- While we do not provide detailed comparisons to other discretizers in the paper, we note in L145-146 that this is because we directly use the VQ-GAN from DAT, where this choice is already ablated in Table 5 of [1]. In the rebuttal phase, we also previously provided an additional result for Reviewer qyW3 on Stable Diffusion that demonstrates the need for VQ-GAN, and we will add this result as an additional ablation in the revision. We also list the result here:
| | ImageNet | V2 | R | Sketch | A | Avg |
| --- | --- | --- | --- | --- | --- | --- |
| Stable Diffusion | 79.1| 67.8 |45.9 |33.4 | 22.0| 49.6 |
| VQGAN | 79.6 | 69.9 | 65.1 | 46.1 | 31.8 | 69.5 |
\
*The operation of VQ-VAE is also missing.*
We described the operation of VQ-GAN in L145-150. For additional clarity, we will add details in the revised paper about how VQ-GAN is trained, why an image-to-image discretizer is necessary in particular, and how it is used in DAD to generate adversarial examples.
\
*Math presentations need to be revised. For instance, where does U come from in L101? Q sometimes takes one input argument but sometimes 2.*
We define U as the set of all transformed images (L101-102). We checked the relevant sections and did not find any instances where Q takes more than one input; does the reviewer mean $\theta$? We use two input arguments for the student model $\theta$ in Eqs. 4 and 5 for simplicity. We realize this could be confusing and will expand the equations to include terms for both the clean and perturbed inputs, as follows.
Eq. 4
$\min_\theta ~ E_{(x,y)\sim P}\left[\, l(\theta(x), y) + \max_{x'} l(\theta(x'), y) \,\right]$
Eq. 5
$\min_\theta ~ E_{(x,y)\sim P}\left[\, l(\theta(x), y) + \max_{x'} l(\theta(Q(x')), y) \,\right]$
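To make the expanded objectives concrete, here is a minimal numpy sketch of Eqs. 4 and 5 for a toy linear-softmax model. This is our illustrative assumption, not the paper's code: the one-step `fgsm_step` stands in for the inner maximization, and `quantize` is only a placeholder for the discretizer $Q$ (a VQ-GAN in the paper).

```python
import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def ce_loss(W, x, y):
    # cross-entropy l(theta(x), y) for a linear model theta(x) = W @ x
    return -np.log(softmax(W @ x)[y])

def grad_x(W, x, y):
    # analytic gradient of the loss w.r.t. the input x
    p = softmax(W @ x)
    onehot = np.zeros_like(p)
    onehot[y] = 1.0
    return W.T @ (p - onehot)

def fgsm_step(W, x, y, eps=0.1):
    # one-step approximation of the inner max over x'
    return x + eps * np.sign(grad_x(W, x, y))

def quantize(x, levels=8):
    # stand-in for the discretizer Q (the paper uses a VQ-GAN)
    return np.round(x * levels) / levels

rng = np.random.default_rng(0)
W, x, y = rng.normal(size=(3, 5)), rng.normal(size=5), 1
x_adv = fgsm_step(W, x, y)

loss_eq4 = ce_loss(W, x, y) + ce_loss(W, x_adv, y)            # Eq. 4
loss_eq5 = ce_loss(W, x, y) + ce_loss(W, quantize(x_adv), y)  # Eq. 5
```

The only difference between the two objectives is that Eq. 5 passes the perturbed input through `quantize` before the loss, mirroring how $Q$ discretizes the adversarial example.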
\
We hope our response addressed the reviewer’s remaining concerns. We are happy to answer further questions.
\
[1] Xiaofeng Mao, Yuefeng Chen, Ranjie Duan, Yao Zhu, Gege Qi, Shaokai Ye, Xiaodan Li, Rong Zhang, and Hui Xue. Enhance the visual representation via discrete adversarial training. arXiv preprint arXiv:2209.07735, 2022.
---
Reply to Comment 1.1.3:
Title: Follow up to the Reviewer
Comment: Hello,
As the discussion period is closing soon, we’d like to follow up and see if the Reviewer has had a chance to consider our response. We hope the Reviewer can raise the score if the concerns were addressed. | Summary: This paper proposes a simple and lightweight framework for improving the robustness of vision models through knowledge distillation. This paper applies a pre-trained teacher model to generate adversarial examples and applies VQGAN as a data augmentation method to generate more informative adversarial samples. This paper provides a theoretical framework for applying a robust teacher to knowledge distillation with data augmentation.
Strengths: +The paper applies a large pre-trained model via knowledge distillation to enhance the model's out-of-distribution robustness with data augmentation; this idea is novel.
+The paper provides a theoretical analysis of how to best distill robustness from a robust teacher trained on a large-scale dataset.
+The experiment of the paper can prove the effectiveness of the method.
Weaknesses: -The motivation for applying VQGAN is not so clear. The article claims that VQGAN can discretize adversarial examples, but can other data augmentation technology such as common data augmentation methods (translation, cropping), other variants of GAN, or Stable Diffusion achieve better performance? This part seems to be a direct application, and I think a discussion of the necessity should be provided.
-Although this paper has a pretty performance of out-of-distribution robustness, I wonder about the performance of adversarial robustness (such as PGD or AutoAttack).
-The frame diagram is somewhat rough and crude (this will not affect the final rating, but I hope the author can make some modifications).
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: see the weakness
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: see the weakness
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the helpful suggestions and positive review. We are glad the reviewer found our idea novel and think the experiments and theory prove its effectiveness.
\
\
*The motivation for applying VQGAN is not so clear. The article claims that VQGAN can discretize adversarial examples, but can other data augmentation technology such as common data augmentation methods (translation, cropping), other variants of GAN, or Stable Diffusion achieve better performance? This part seems to be a direct application, and I think a discussion of the necessity should be provided.*
The motivation for applying VQGAN is to ensure perturbations are semantic, to better match real-world transformations. This allows us to focus on an out-of-distribution robustness setting. This is motivated by several works that use generative models to construct data augmentations; we adopt the choice of VQGAN from the prior SOTA DAT, which finds that VQGAN performs best. The VQGAN can apply a semantically meaningful transformation to the image that better captures a useful distribution shift than standard pixel-level perturbations.
The effects of not applying a generative model and using common data augmentations can be observed in Table 7 in the Appendix. Here we use Mixup and RandAugment which are components of AugReg and demonstrate the results of distilling from CLIP without using the generative model / adversarial examples. The results are much worse, with an average -13.2% decrease in performance on natural distribution shifts.
Following the reviewer's suggestion, we also provide results on Stable Diffusion, a newer model released since DAT was proposed. We use the generic prompt "A photo of an {object}". We observe a significant decrease in performance when training with DAD using Stable Diffusion compared to VQGAN. We hypothesize that VQGAN is better suited for the image-to-image task of discretizing images than a text-to-image model. Modifying the text prompt could perhaps boost performance and would be an interesting avenue for future work, especially since CLIP also requires a text prompt.
| | ImageNet | V2 | Rendition | Sketch | A | Avg |
| --- | --- | --- | --- | --- | --- | --- |
| Stable-Diffusion | 79.1 | 67.8 | 45.9 | 33.4 | 22.0 | 49.6 |
| VQGAN | 79.6 | 69.9 | 65.1 | 46.1 | 31.8 | 69.5 |
\
\
*Although this paper has a pretty performance of out-of-distribution robustness, I wonder about the performance of adversarial robustness (such as PGD or AutoAttack).*
This is an interesting question. We evaluate our ResNet50 checkpoint with FGSM. We observe an improvement in adversarial robustness compared to both standard training and DAT. We will provide results on DamageNet and ViT-B in the discussion phase.
| Method | FGSM |
|--------------|-------|
| ResNet50 | 23.5% |
| ResNet50-DAT | 33.0% |
| ResNet50-DAD | 43.5% |
\
*The frame diagram is somewhat rough and crude (this will not affect the final rating, but I hope the author can make some modifications).*
Thank you for pointing this out, we have uploaded an updated figure in the uploaded PDF.
\
\
We hope our response addressed the reviewer’s questions and concerns. We are happy to answer any further questions or provide additional results.
---
Rebuttal 2:
Title: Follow up on adversarial robustness ablation
Comment: The reviewer asked about the adversarial robustness of models trained with DAD. We previously offered a preliminary result showing a small improvement in robustness against FGSM for ResNet50, and we now have results on PGD and AutoAttack, as well as on ViT-B. Below is the updated table with a more comprehensive evaluation.
| Method | FGSM | PGD | AutoAttack |
| --- | --- | --- | --- |
| ResNet50 | 23.5 | 1.0 | 0.0 |
| ResNet50 DAT | 33.0 | 5.9 | 0.0 |
| ResNet50 DAD | 43.5 | 12.6 | 0.0 |
| ViT-B | 49.4 | 24.7 | 0.0 |
| ViT-B - DAT | 64.9 | 26.2 | 0.0 |
| ViT-B - DAD | 47.2 | 25.0 | 0.0 |
There is a small improvement in adversarial robustness for simpler attacks, but neither DAT nor DAD is robust to AutoAttack, as the VQ-GAN discretizes the perturbation. We observe that DAT is stronger than DAD for ViT-B. Unlike out-of-distribution robustness, adversarial robustness is based on perturbations generated with gradients from the base model, so DAT models are trained on images closer to these perturbations than DAD models (which were trained on perturbations generated with CLIP gradients). However, for ResNet50, DAD is better even for adversarial robustness, as distillation helps smaller-capacity models learn discrete adversarial examples, as we observe in Table 1.
We hope this result is helpful and better answers the reviewer's question. We would be happy to answer any further questions. | Summary: The paper introduces a new method to improve the robustness of vision models through knowledge distillation by leveraging a large-scale pre-trained teacher model (CLIP) and a VQ-GAN discretizer. The paper does a great job motivating the method and arguing for knowledge distillation when tackling out of distribution robustness. It also includes extensive theoretical justifications. And finally, the paper introduces and evaluates a novel distillation objective ("DAD"). In the proposed method ("DAD") the student model is trained to minimize a loss made up of a standard cross-entropy loss, a KL to the teacher's predictions (i.e. standard knowledge distillation) and an additional KL between the teacher and student's predictions on additional (teacher-adversarial) data points (which is novel). The paper also includes a reasonable evaluation section for the method with overall positive results, while highlighting some considerable performance improvements on natural distribution shifts over baselines.
Strengths: The paper is relatively well written and easy to follow. The paper introduces a relatively novel and interesting knowledge distillation loss which should have increasingly wider applicability and grow in significance as more and more of the ML world is moving towards leveraging pre-trained foundation models.
Weaknesses: - Missing paragraph detailing DAT: since a large proportion of the strongest results seem to be relying on combining DAD with DAT. The paper should also include a description of DAT's loss and the "DAT + DAD" loss written out (in the appendix if no space available in the main body).
- The paper mentions the method has "negligible computational overhead" -- I strongly disagree. Computing adversarial examples for a CLIP model that also passes them through VQGAN is not negligible. One can say that the computation cost is amortized though -- as you only have to do it once and then read it from disk.
- Missing "Avg" column in Table 2 and 3, and best results are not shown in bold in Table 2.
- Loss is unclear (Eq 9) -- please expand and clarify this to show which variables are being optimized over, at which stage, which are the constraints exactly, and specify the losses (what is l_1 and l_2 precisely - these are only mentioned in passing in line 125-126 being "typically"/"usually" CE and KL); and also please use or remove "n", as currently it is not used to index anything.
- Missing references, e.g., on [1] (on worst case risk over a population of distributions) and [2] (for a related formulation of a similar set of assumptions and similar theoretical conclusion):
[1] Arjovsky et al., Invariant Risk Minimization.
[2] Calian et al., Defending against image corruptions through adversarial augmentations.
- Figure 1 does not seem to be faithful depiction of the method as described in the paper, so it should ideally be improved significantly (the caption could also give more details towards explaining the method).
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: - At a high level, looking at equation (9) of the full model loss, I'm trying to understand how each component affects downstream robustness. So, I'd like to able to map the contribution of each of the two "l_2" losses and the way in which the adversarial example is computed to specific results. This seems like a pretty straightforward and important ablation. Table 3 and Table 5 present just a slice of this ablation as far as I can tell. Could you please clarify how one can reconstitute this ablation from the results presented in the paper?
- The adversarial examples could also not be computed but rather perturbations could be sampled instead (while still using the VQGAN), or random data augmentation methods (or just an identity function used, i.e. which would correspond to just dropping the last l_2 loss in Eq 9). (From Table 6 it seems that more iterations actually results in worse performance across many metrics.) Have you experimented with this?
- In Table 1, does the row "DAT + DAD (Ours)" refer to training with DAT & DAD on top of AugReg-ViT[55]? If yes, please mark it with a leading "+" for clarity.
- In line 303, the paper states "These adversarial examples are generated with DAD" -- was "DAT" intended?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The authors adequately addressed the method's limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the extensive and positive review and helpful suggestions. We are glad the reviewer found our proposed method novel and relevant.
\
\
*Missing paragraph detailing DAT: since a large proportion of the strongest results seem to be relying on combining DAD with DAT...*
Thank you for bringing this to our attention; we will formally write out the objectives for all compared methods in the revised paper. DAT follows the standard adversarial training objective, but with discretized samples:
\
\
$CE(x, y) + CE(x’,y)$
\
\
We combine DAT with DAD by adding DAT's second cross-entropy term, $CE(x’, y)$, to the DAD objective. The full loss is:
\
\
$CE(x, y) + KL(\theta(x), \phi(x)) + KL(\theta(x’), \phi(x’)) + CE(x’,y)$
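As an illustrative numpy sketch (not the paper's code) of the combined objective above, assuming `theta` (student) and `phi` (teacher) both map an input to logits. The KL direction shown, with the teacher's prediction as the reference distribution, is a common knowledge-distillation convention and is our assumption here.

```python
import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def ce(logits, y):
    # cross-entropy against the ground-truth label y
    return -np.log(softmax(logits)[y])

def kl(ref_logits, logits):
    # KL(p_ref || p), with the teacher's prediction as the reference
    p, q = softmax(ref_logits), softmax(logits)
    return float(np.sum(p * (np.log(p) - np.log(q))))

def dat_dad_loss(theta, phi, x, x_adv, y):
    # CE(x, y) + KL(theta(x), phi(x)) + KL(theta(x'), phi(x')) + CE(x', y)
    return (ce(theta(x), y)
            + kl(phi(x), theta(x))
            + kl(phi(x_adv), theta(x_adv))
            + ce(theta(x_adv), y))

rng = np.random.default_rng(0)
Ws, Wt = rng.normal(size=(3, 5)), rng.normal(size=(3, 5))
student = lambda v: Ws @ v   # toy linear stand-ins for theta and phi
teacher = lambda v: Wt @ v
x, x_adv, y = rng.normal(size=5), rng.normal(size=5), 0

loss = dat_dad_loss(student, teacher, x, x_adv, y)
```

Note that when the student matches the teacher exactly, both KL terms vanish and the loss reduces to DAT's two cross-entropy terms.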
\
\
\
*The paper mentions the method has "negligible computational overhead" -- I strongly disagree. Computing adversarial examples for a CLIP model that also passes them through VQGAN is not negligible. One can say that the computation cost is amortized though -- as you only have to do it once and then read it from disk.*
We apologize for not making the claim clearer. We agree that generating adversarial examples with a VQGAN is not negligible. But as the reviewer suggests, the cost over a full training run is amortized. We generate the set of adversarial examples once from the frozen teacher and reuse it for all of our training. For training a ViT-B model for 300 epochs, this cost is essentially reduced by a factor of 300. We also use the straight-through gradient applied in DAT, so the actual cost of generation is the same, but because DAT generates these examples every epoch, DAD is significantly cheaper. For a more formal analysis, we follow the methodology in FastAdvProp [1]. The only major additional cost is training on the new adversarial examples. We will also update the claim in the revised paper to make the statement more explicit.
| Method | Attack Steps | Training Budget |
|----------------------|--------------|-----------------|
| ImageNet | 0 | 1x |
| Adversarial Training | 10 | 11x |
| AdvProp | 5 | 7x |
| AdvProp | 1 | 3x |
| DAT | 1 | 3.5x |
| DAD | 1 | 2x |
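As a back-of-the-envelope illustration of the amortization argument above (the unit cost of one generation pass is an assumed free parameter here, not a measured number):

```python
# DAT regenerates discrete adversarial examples every epoch from the model
# being trained; DAD generates them once from the frozen teacher and reuses
# them from disk for the whole training run.
epochs = 300       # e.g. a ViT-B training run
gen_cost = 1.0     # cost of one generation pass, in arbitrary units (assumed)

dat_generation = epochs * gen_cost   # per-epoch regeneration
dad_generation = 1 * gen_cost        # one-time generation

# relative to DAT, DAD's generation cost shrinks by a factor of `epochs`
reduction = dat_generation / dad_generation
```

This is why the generation cost becomes negligible over a full training run even though a single generation pass (CLIP gradients plus VQGAN) is not cheap.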
\
*Missing "Avg" column in Table 2 and 3, and best results are not shown in bold in Table 2.*
We have uploaded an updated table in the PDF and will include this in the revised paper.
\
\
*Loss is unclear (Eq 9)...and also please use or remove "n", as currently it is not used to index anything.*
We apologize for the confusion. We will remove “n” from this loss. We provide the loss here.
$CE(x, y) + KL(\theta(x), \phi(x)) + KL(\theta(x’), \phi(x’))$
\
\
*Missing references...*
Thank you for bringing these references to our attention. We will cite them in the revised paper.
\
\
*At a high level, looking at equation (9) of the full model loss, I'm trying to understand how each component affects downstream robustness...Could you please clarify how one can reconstitute this ablation from the results presented in the paper?*
We apologize for not making the objectives of the baselines and ablations explicit. We will add this to the revised paper. Yes, the results from ablating different components of DAD can be reconstructed from our experiments.
\
\
$L1$
$CE(x, y)$
This corresponds to standard training on ImageNet and is the simplest baseline we compare to in Table 1.
\
\
$L1 + L2_1$
$CE(x, y) + KL(\theta(x), \phi(x))$
This corresponds to standard knowledge distillation. We observe the results of this objective on various student/teacher architectures in Table 7. We find that this simple objective is enough to transfer a degree of robustness from a variety of teachers.
\
\
$L1 + L2_1 + L2_2$
$CE(x, y) + KL(\theta(x), \phi(x)) + KL(\theta(x’), \phi(x’))$
This is our proposed method DAD.
\
\
$CE(x, y) + CE(x’, y)$
To separate L2_2 and see the direct effect of using DAD adversarial examples, we point the reviewer to Table 5. Here we use the DAT objective with precomputed DAD samples. We find that training on DAD adversarial examples without distillation can also improve performance on natural distribution shifts but to a smaller extent.
\
\
*The adversarial examples could also not be computed but rather perturbations could be sampled instead (while still using the VQGAN), or random data augmentation methods (or just an identity function used, i.e. which would correspond to just dropping the last l_2 loss in Eq 9)...*
We thank the reviewer for the interesting suggestion. In fact, we already provide results using common data augmentations. This can be observed in Table 7 in the Appendix. Here we use Mixup and RandAugment, which are components of AugReg, and demonstrate the results of distilling from CLIP without using the generative model. Since these augmentations replace the base image and do not add additional samples, the two L_2 losses are combined. The results are much worse, with an average -13.2% decrease in performance on natural distribution shifts. We will provide the results of sampling the VQGAN in the discussion phase.
\
\
*In Table 1, does the row "DAT + DAD (Ours)" refer to training with DAT & DAD on top of AugReg-ViT[55]?*
Yes, that is correct; DAT + DAD is also on top of AugReg. We will update the table with the leading “+”.
\
\
*In line 303, the paper states "These adversarial examples are generated with DAD" -- was "DAT" intended?*
No, the original statement is correct. The goal of this ablation is to see how DAD examples improve over DAT examples.
\
\
We hope our response addressed the reviewer’s questions and concerns. We are happy to answer any further questions.
[1] Jieru Mei, Yucheng Han, Yutong Bai, Yixiao Zhang, Yingwei Li, Xianhang Li, Alan Yuille, and Cihang Xie. Fast advprop. ICLR, 2022.
---
Rebuttal Comment 1.1:
Title: Follow up on data augmentation ablation
Comment: The reviewer asked about sampling from the VQGAN without computing gradients. As promised in the initial response, we provide an additional result, which was not finished at the time, on sampling from the VQGAN without using gradients.
The results are significantly worse than DAD, indicating the need to use gradients to discover diverse samples. This is also supported by our theoretical analysis that indicates more diverse adversarial examples are better for robustness. Higher in-distribution performance also suggests the samples are less diverse.
| | ImageNet | V2 | R | Sketch | A | Avg |
| --- | --- | --- | --- | --- | --- | --- |
| VQGAN - Sample | 80.9 | 70.1 | 49.3 | 34.9 | 24.0 | 51.8 |
| VQGAN - Grad | 79.6 | 69.9 | 65.1 | 46.1 | 31.8 | 69.5 |
We hope this result is helpful and improves our response to the reviewer's question. | Summary: This paper introduces a novel approach called discrete adversarial distillation (DAD) to train small, robust models using large-scale models as teachers. The authors establish the knowledge distillation framework for out-of-distribution robustness and provide a theoretical framework for utilizing large-scale models as teachers. DAD achieves state-of-the-art performance on natural distribution shifts, surpassing adversarial training and traditional distillation techniques.
Strengths: Overall, this paper is also well-written and clearly structured; the authors provide both theoretical analysis and empirical experiments to verify their approach. The reviewer would like to list some strengths of the work.
Firstly, it extends the knowledge distillation framework to the domain of out-of-distribution robustness, opening up new possibilities for training small, robust models that can generalize effectively across diverse populations and environments.
Secondly, this paper highlights the advantages of leveraging large-scale models as teachers, providing a theoretical framework to support this approach. This understanding can inform the development and utilization of robust models in practical applications.
Lastly, the experimental results showcase state-of-the-art performance on natural distribution shifts, surpassing existing techniques. This signifies the potential of the proposed DAD method in addressing the challenges of robustness and generalization in ML models.
Weaknesses: The reviewer found it hard to understand the main idea at first glance, the paper could improve its clarity in figures, as well as in presenting the assumptions and theoretical analysis, and its connection to the proposed method. Explicitly linking the theoretical framework to the practical implementation of the DAD method would help readers grasp the underlying principles and motivations behind the proposed approach more effectively.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: Please refer to the weakness. In addition, why the authors did not provide the code for review but answered the *Reproducibility* as "yes"? Please provide the code during the rebuttal period, using Anonymous GitHub is a viable option. Also, the authors claim that the proposed only brings negligible computational overhead, but the reviewer could not find the part that addresses computing resources; what exactly are the computational costs for the proposed framework?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the constructive comments and positive review. We are glad the reviewer agreed with the core contributions of our setting and the importance of small, robust models.
\
\
*The reviewer found it hard to understand the main idea at first glance, the paper could improve its clarity in figures, as well as in presenting the assumptions and theoretical analysis, and its connection to the proposed method. Explicitly linking the theoretical framework to the practical implementation of the DAD method would help readers grasp the underlying principles and motivations behind the proposed approach more effectively.*
1. We have attached an updated main figure in the PDF.
2. In Sec 3.4, our goal is to establish the conditions under which robustness is achieved through data augmentation / adversarial training. We find that the diversity of an adversarial sample (measured by Wasserstein distance from the training distribution) depends on the diversity of the model’s training distribution, so we propose using the large-scale teacher’s gradients to generate these samples. This can be mapped to Eq. 9 (L163) if we substitute α(ϕ(P1)) for DAD and α(ϕ(P2)) for DAT in Lemma 3.4. We will revise the section with this explicit notation.
3. For a qualitative result linking our theory and implementation, we kindly point the reviewer to the Appendix, where we provide figures that demonstrate the relationship between the magnitude of the distribution shift and performance. In Figure B, we find that, contrary to standard ImageNet training and DAT, DAD has improved performance when the Wasserstein distance of the evaluation distribution is higher. This is also shown in Figure C, where we show that the improvement over standard training with DAD is higher than with DAT. This reflects our theoretical conclusion that DAD adversarial examples are more informative. We moved this result to the Appendix due to space constraints, but we agree with the reviewer that demonstrating this connection is important and will add it to the main paper in the revised version.
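To illustrate the Wasserstein-distance measure of diversity mentioned above, here is a small self-contained sketch (our own illustrative assumption, not the paper's evaluation code). For two equal-sized 1-D empirical samples, the Wasserstein-1 distance reduces to the mean absolute difference of the sorted samples.

```python
import numpy as np

def wasserstein_1d(a, b):
    # W1 between two equal-sized 1-D empirical distributions:
    # mean absolute difference of the sorted samples
    a, b = np.sort(np.asarray(a, float)), np.sort(np.asarray(b, float))
    assert a.shape == b.shape
    return float(np.mean(np.abs(a - b)))

rng = np.random.default_rng(0)
train = rng.normal(0.0, 1.0, size=2000)   # stand-in "training" distribution
mild  = rng.normal(0.2, 1.0, size=2000)   # mild distribution shift
hard  = rng.normal(2.0, 1.0, size=2000)   # strong distribution shift

d_mild = wasserstein_1d(train, mild)
d_hard = wasserstein_1d(train, hard)
```

A more strongly shifted evaluation distribution yields a larger distance (`d_hard > d_mild`), which is the axis along which Figures B and C compare performance.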
\
*In addition, why the authors did not provide the code for review but answered the Reproducibility as "yes"? Please provide the code during the rebuttal period, using Anonymous GitHub is a viable option*
Thanks for bringing this to our attention, we will attach the code following rebuttal guidelines.
\
\
*Also, the authors claim that the proposed only brings negligible computational overhead, but the reviewer could not find the part that addresses computing resources; what exactly are the computational costs for the proposed framework?*
We apologize for not being clear about this claim. We have attached a table with the results of an empirical analysis following the methodology in FastAdvProp [1] of the full computational cost of DAD. To clarify, what we mean by “negligible computational overhead” is the cost of generating adversarial examples, which is the largest cost of adversarial training. Compared to standard adversarial training and DAT, since we generate our samples from a frozen teacher and reuse them, the cost of generation over a full training run becomes negligible. The only major additional cost over standard training is training on the new examples, which is also shared by other forms of adversarial training. We will update the claim with the precise statement in the revised paper.
| Method | Attack Steps | Training Budget |
|----------------------|--------------|-----------------|
| ImageNet | 0 | 1x |
| Adversarial Training | 10 | 11x |
| AdvProp | 5 | 7x |
| AdvProp | 1 | 3x |
| DAT | 1 | 3.5x |
| DAD | 1 | 2x |
\
We hope our response addressed the reviewer’s questions and concerns. We are happy to answer any further questions.
[1] Jieru Mei, Yucheng Han, Yutong Bai, Yixiao Zhang, Yingwei Li, Xianhang Li, Alan Yuille, and Cihang Xie. Fast advprop. *ICLR*, 2022
---
Rebuttal Comment 1.1:
Comment: I have carefully read the authors' rebuttal and reviewed the submitted code. I thank the authors for their effort and time in addressing my questions. I have no further comments. | Rebuttal 1:
Rebuttal: We are thankful for the generally positive reviews and useful feedback. We are glad reviewers found our idea novel (Reviewers gGrQ, Y3Hr, qyW3), setting significant (Reviewers gGrQ, Y3Hr), and analysis convincing (Reviewers GKv1, qyW3, ktZ5). We provide an updated main figure, updated Table 2, and additional visualizations of DAD adversarial examples. To address specific concerns, we also provide additional experimental results on using Stable Diffusion, computational cost, and adversarial robustness.
Pdf: /pdf/c8401f4ca29a713e70737a3bcac116bf988b8327.pdf | NeurIPS_2023_submissions_huggingface | 2023 | Summary: This paper presents a knowledge distillation framework for vision models, leveraging a teacher model (CLIP). In the setup, a discretizer (VQGAN) is introduced to generate the adversarial examples from the teacher model. The adversarial training (AT) objective is adapted to the knowledge distillation setting. The proposed discrete adversarial distillation (DAD) demonstrates improved performance on out-of-distribution tasks compared to other AT approaches.
Strengths: This paper presents an interesting approach to knowledge distillation for large-scale vision models. The experiments are thoroughly conducted, with various SOTA approaches compared and a comprehensive ablation study. The writing is easy to follow and the idea is well established.
Weaknesses: 1. One main concern is the comparison of the proposed method with DAT. In DAT, the idea of a discretizer is introduced, and VQGAN was also considered in their framework. In their paper, various tasks were evaluated, including image classification, self-supervised learning, and object detection. To me, DAD seems to apply a similar methodology, but in an OOD setup.
2. In the experiment, it seems like DAT has achieved competitive results in ImageNet-1K, and even better results in ImageNet-21K. What is the key advantage of DAD over DAT?
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. How come DAD+DAT performs worse than DAT in some cases?
2. Are there any qualitative results? What are the drawbacks of previous model(s) that are better addressed by the newly proposed DAD?
3. Have you tried training the teacher model in parallel with the student model, rather than only providing adversarial samples in an offline manner?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: Please see weakness.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the insightful comments and positive review. We are glad the reviewer found our idea well established, experiments thorough and comprehensive, and writing easy to follow. We address the concerns about the comparisons and results below.
\
\
*One main concern is on comparing the proposed method with DAT...To me, DAD seems to be applying the similar methodology, but on a OOD setup.*
Although DAD draws upon the use of a VQGAN for data augmentation, the methodology is distinct in three key ways.
1. *Setting.* DAT follows a standard data augmentation setting to improve robustness. We are the first to also introduce a robust teacher and use knowledge distillation to further improve robustness. We focus on OOD robustness due to its practical usefulness and the specialization of large-scale models on natural distribution shifts. These models exhibit strong robustness due to their diverse training data, and we are the first to leverage these representations as an additional form of regularization.
2. *Objective.* We introduce a novel knowledge distillation objective for the OOD-robustness setting where we add a second KL-divergence term between the student and teacher predictions on the augmented image. Previous works in defensive distillation use the teacher predictions on the normal image for this second term, but this is not adaptive to semantic transformations.
3. *Data augmentation.* This is the aspect of DAD that is adapted from DAT. However, our key novelty here is using the teacher’s gradients to generate adversarial examples. We find that this results in more diverse adversarial examples closer to the teacher’s representations, and our theoretical framework is based on this idea. This also makes DAD much cheaper than DAT since we only generate the adversarial examples once.
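As an illustrative sketch of the distillation objective described in point 2 above (a hypothetical NumPy implementation for intuition only; the function names, temperature-free form, and weighting `alpha` are our assumptions, not the paper's code):

```python
import numpy as np

def softmax(z, axis=-1):
    # numerically stable softmax
    z = np.asarray(z, dtype=float)
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def dad_loss(student_clean, student_adv, teacher_adv, labels, alpha=1.0):
    """Cross-entropy on clean images plus a KL term aligning the student
    with the frozen teacher's predictions on the adversarial/augmented image."""
    p_clean = softmax(student_clean)
    ce = -np.mean(np.log(p_clean[np.arange(len(labels)), labels]))
    p_t = softmax(teacher_adv)   # teacher is frozen: no gradient flows here
    p_s = softmax(student_adv)
    kl = np.mean(np.sum(p_t * (np.log(p_t) - np.log(p_s)), axis=-1))
    return ce + alpha * kl
```

When the student matches the teacher on the adversarial image, the KL term vanishes and only the clean cross-entropy remains, which is what makes the teacher's predictions on the *augmented* image (rather than the clean one) the adaptive part of the objective.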
\
*In the experiment, it seems like DAT has achieved competitive results in ImageNet-1K, and even better results in ImageNet-21K. What is the key advantage of DAD over DAT?*
Although DAT performs well, we observe distinct improvements in DAD across distributions and especially on natural distribution shifts.
- On average, there is an improvement of 2.7% for ViT-B and 4.2% for ResNet50 on ImageNet-1K across 7 distribution shifts.
- Also following Reviewer Y3Hr’s suggestion, we updated Table 2 with the average performance for clarity. *DAD has higher results than DAT on ImageNet-21K as well.* We attached the updated table in the uploaded PDF.
- DAD has a significant improvement (avg. +10.3% for ViT-B and avg. +7.1% for ResNet50) on natural distribution shifts (IM-A, IM-R, IM-Sketch). The key advantage of DAD over DAT is that it is able to leverage the robustness of a pretrained foundation model. This is also the likely cause of the worse performance on synthetic distribution shifts. In fact, the CLIP teacher performs worse than DAT training on ImageNet-C (-15.5%) and ImageNet-Stylized (-4.6%) due to being trained only on natural images, making it difficult to surpass DAT on these distributions with a distillation-based approach.
\
*How come DAD+DAT performs worse than DAT in some cases?*
We found that combining DAD with DAT resulted in higher in-distribution accuracy and accuracy on synthetic distribution shifts, but at the cost of lower performance on natural distribution shifts than DAD. In practice, DAD targets natural distribution shifts while DAT performs better on synthetic corruptions. Due to CLIP’s diverse training set, our theoretical framework in Sec 3.4 shows CLIP’s adversarial examples encode similar representations. Combining them may dilute the specialization that leads to their strong individual performance. We do note that DAD+DAT still outperforms DAT on natural distribution shifts.
\
*Are there any qualitative results? what are the drawbacks from previous model(s) that are better addressed by the newly proposed DAD?*
Yes, we included qualitative results:
1. We add visualizations of the generated images for DAD in the uploaded PDF. Both DAT and DAD adversarial examples are computed with the same budget. We select sets where the DAT image is classified correctly but the DAD image is classified incorrectly to highlight the difference. DAD images appear to be more graphic, displaying a greater variety of distribution shifts than the corresponding DAT image.
2. We kindly point the reviewer to the Appendix where we provide graphs in Figures A-C on page 15 describing the connection to our theoretical framework. Here we demonstrate the relationship between Wasserstein distance of the evaluation distribution and performance. In figure C, we show that for a test distribution, DAD has a larger improvement over standard ImageNet training than DAT the larger the Wasserstein distance between the test distribution and training distribution is. This suggests DAD can better address harder distribution shifts like natural distribution shifts (IM-A, IM-R, IM-Sketch), which are also closer real-world use cases than simple synthetic modifications to the base image (IM-C, IM-Style).
3. DAD is also cheaper than DAT and AT. We find that DAD is only 2x as expensive as standard training, compared to 3.5x for DAT and 11x for AT.
\
*Have you try training the teacher model in parallel with the student model, not just provide adversarial samples from offline manner?*
This is an interesting suggestion and something we started to try before deciding not to pursue further. Joint training certainly has the potential to discover harder and more diverse examples, but at a higher computational cost of updating two models. In addition, it is still unclear what the best way to apply adversarial training to CLIP is, especially on only ImageNet data. We found that training the teacher led to instability, but this is definitely an idea that could lead to further improvements.
\
We hope our response addressed the reviewer’s questions and concerns. We are happy to answer any further questions. | null | null | null | null | null | null |
Patch Diffusion: Faster and More Data-Efficient Training of Diffusion Models | Accept (poster) | Summary: This paper presents PatchDiffusion, a novel framework designed to address the scalability challenges faced by most diffusion models in terms of training and sampling. The proposed framework adopts a patch-wise training approach, where a denoising network is trained on image patches rather than entire high-resolution images. To generate patches at the target resolution, PatchDiffusion leverages progressive or stochastic scheduling techniques that utilize different patch sizes throughout the training process. Experimental results on a small-scale dataset demonstrate that PatchDiffusion not only enhances the quality of generated samples but also reduces the overall training time. Furthermore, the experimental results on medium-scale datasets suggest that PatchDiffusion could be a resource-efficient solution for training diffusion models.
Strengths: This paper tackles a significant challenge in training diffusion models, which often involve substantial computational costs. To address this issue, the authors propose PatchDiffusion as an optional solution that can be seamlessly integrated into any diffusion model pipeline, regardless of the chosen backbone, sampler, or other modules within the pipeline.
I think that PatchDiffusion operates as a form of data augmentation, thereby enhancing the generation quality of the model. This approach provides additional benefits beyond reduced computational costs.
Weaknesses: [Limited experiments - important baseline missing and low performance]
The primary motivation behind patch-wise training is to reduce computation costs during training. Given this motivation, it might be worth considering a more efficient backbone instead of U-Net. Recently, successful approaches have replaced U-Net with ViT, as demonstrated in the papers "Scalable Diffusion Models with Transformers" (arXiv'22) and "All are Worth Words: A ViT Backbone for Diffusion Models" (CVPR'23).
In the case of DiT, reducing the latent resolution by patchifying with a 2x2 patch size has shown improvements in training scalability and performance. It is worth exploring whether applying patch-wise training to DiT and U-ViT backbones could yield better results. However, the potential gains from such an approach are uncertain.
Additionally, it is important to note that in class-conditional image generation on ImageNet-1K, the FID score of PatchDiffusion appears to be significantly worse than that of DiT and other related works. While the current state-of-the-art methods achieve an FID score of less than 4, this work reports a score of around 7.65.
[Writing needs to be improved. ]
The writing quality of the paper could be improved further, especially in terms of highlighting the comparison between this work and the baselines. It would be beneficial to include a figure illustrating the trade-off between FID (or any other measure of generation quality) and FLOPs. This would help in understanding how PatchDiffusion and other efficient diffusion backbones compare to each other. The authors can refer to similar trade-off figures presented in the DiT and U-ViT papers.
Evaluating the quality of writing can be subjective, and I am open to hearing other reviewers' comments on this matter.
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: #1. The primary objective of this work is to minimize the computational cost of training/inference through patch-wise training. However, it could be worthwhile to consider an alternative solution by employing a more efficient backbone inherited from ViT, which incorporates a patch-fying (i.e., tokenizing) module. Including a comparison of this work with DiT or U-ViT in the paper would further highlight the benefits of this approach. Moreover, presenting training compute vs. FID plots for the comparison would greatly assist in determining the most effective approach.
#2. In my understanding, the true advantages of this work are likely to be demonstrated through experiments on fine-tuning. This approach has the potential to reduce the fine-tuning cost for any pre-trained diffusion backbone. In this context, incorporating LoRA-type methods could complement this approach. Including empirical analysis of this nature in the paper would enhance the understanding of the benefits of this work.
#3. The GAN community has explored patch-wise training in various ways. Recently, Any-resolution GAN (ECCV’22) has been introduced as a promising solution for training generative models with variable-size images. Although this paper primarily aims to reduce training costs rather than utilizing multiple size images in the training procedure, it would be interesting to explore whether the proposed framework can be extended to generate variable-size images. Doing so would further highlight the benefits of this framework.
#4. A minor comment regarding the inclusion of "Appendix A." To provide a concise overview of the theoretical interpretation of the patch-wise training scheme, it would be beneficial to incorporate a brief summary of these observations within the main body of the paper.
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 2 fair
Limitations: The limitations of this work were not explicitly outlined, and I didn’t observe any discussion regarding potential negative societal impacts. However, there is no clear negative societal impact, since all experiments are conducted in controlled benchmark datasets.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank Reviewer NLEo for providing constructive suggestions. Below, we address each concern raised in your comment point by point. Please let us know if you have any further questions or whether this adequately addresses all these issues.
> Q1+Weakness 1: it could be worthwhile to consider an alternative solution by employing a more efficient backbone, such as DiT. Also, plotting training compute vs. FID for the comparison would greatly assist in determining the most effective approach. ...
We acknowledge that both U-Net based and transformer-based diffusion models have attracted significant attention and exhibited strong performance. As of now, there is no conclusive evidence favoring one over the other. Notably, U-Net based diffusion models continue to dominate text-to-image generation, as evidenced by their usage in Stable Diffusion, Imagen, and DALL-E 2.
In reference to your statement, "Given this motivation, it might be worth considering a more efficient backbone inherited from ViT," we respectfully present a differing perspective. Quoting the DiT paper, "Figure 2 (right) demonstrates the computational efficiency of DiT-XL/2 (**118.6 Gflops**) relative to latent space U-Net models like LDM-4 (**103.6 Gflops**) and notably more efficient than pixel space U-Net models like ADM (1120 Gflops) or ADM-U (742 Gflops)." When applied to the latent space, U-Net proves to be more efficient than the Transformer architecture. In our ImageNet-1K experiment, Patch Diffusion employs ADM-U on latent space (1x4x32x32) with a considerably lower computational cost of around **27 Gflops**, demonstrating its efficiency. It is pertinent to note that our model possesses approximately 290M parameters, whereas DiT-XL/2 boasts 675M parameters. As elucidated in Table 4 of the DiT paper, the DiT model exhibits notably inferior performance on ImageNet-1K with fewer than 300M parameters.
Moreover, transformer-based diffusion models typically necessitate a substantially higher number of training steps to converge. Table 4 of the DiT paper exemplifies this, as DiT-XL/2 requires 7M steps to achieve convergence, while the U-Net model utilized in our work reaches convergence within 0.61M steps.
> Weakness 2: the current state-of-the-art methods (DiT) achieve an FID score of less than 4, this work reports a score of around 7.65.
The performance reported in Table 2 of our manuscript is under the setting of **NOT** adopting classifier-free guidance. Under the same setting, DiT only achieves 9.62 FID score while Patch Diffusion achieves 7.65 FID score, demonstrating the effectiveness of our proposed method. If applying classifier-free guidance to Patch Diffusion, our model could reach 2.74 FID score on ImageNet-1K, which matches the state-of-the-art performance but costs a significantly lower computational cost (27 Gflops v.s. 118.6 Gflops).
> Weakness 3: It is worth exploring whether applying patch-wise training to DiT and U-ViT backbones could yield better results.
Our current Patch Diffusion is built and designed for U-Net based diffusion models. We agree with Reviewer NLEo that applying patch diffusion training onto transformer based diffusion models is also worth further investigation, but that currently is out of the scope of this paper, and we left that for future study.
> Q2: incorporating LoRA for fine-tuning
We appreciate Reviewer NLEo for recognizing the potential applicability of our method in finetuning scenarios. We do agree that our patch diffusion training approach harmonizes effectively with LoRA-type methods, thereby potentially enhancing finetuning efficiency. While LoRA is not the primary focus of this paper, we are open to the prospect of integrating LoRA in future investigations.
> Q3: variable-size images?
Any-resolution GAN [5] is primarily tailored for mixed-resolution datasets from images in the wild, whereas our work focuses on improving the training efficiency of diffusion models. In terms of generating variable-size images, we provided the extrapolation results of Patch Diffusion in the pdf of Response to All.
> Q4: Moving Appendix to the main manuscript.
We would love to include theoretical interpretation in our main paper. We will consider adding a summary of the theoretical interpretation based on the page limit.
> Limitations discussion is missing.
We discussed limitations and potential future work in Section 5, and potential negative social impacts in Appendix B.1.
---
Rebuttal Comment 1.1:
Comment: We would greatly appreciate it if you could review our response by August 21st. After that date, it might be challenging for us to engage in further discussions. If you have any follow-up questions, please don't hesitate to reach out. We deeply value your expertise and time. | Summary: This paper presents a new training technique that improves the training speed of diffusion models. Instead of training the diffusion model on the entire image, the authors propose training on sampled patches of the image. This approach maintains the theoretical foundation of the diffusion model by keeping the training objective function mostly unchanged. By combining this approach with a fully-convolutional U-Net architecture, the computational complexity is reduced, resulting in faster training. To minimize the quality difference between the partial image approach and the conventional whole image approach, a stochastic/progressive patch size scheduling was proposed, and the corresponding ablation study was conducted. This study investigates the optimal probability of using the whole image, considering the trade-off between training time and generation quality. Summarizing, the proposed method enables faster learning while preserving the theoretical foundation and generation quality of traditional diffusion models.
Strengths: This paper presents two significant advantages of training diffusion models using partial images instead of the entire images. Firstly, it reduces model complexity, leading to faster training, which is especially beneficial for state-of-the-art baseline diffusion models that require extensive GPU hours for training. This approach offers the potential for energy-efficient training by effectively reducing the overall training time.
Secondly, training with partial images proves effective in scenarios with limited datasets, outperforming traditional methods. In cases where the dataset is insufficient, diffusion models struggle to accurately predict the true data distribution due to overfitting (limited data is essentially a sparse sampling of the true data distribution). By partitioning the image into patch units, the training process simulates a larger dataset, providing the model with more training samples. This enables the diffusion model to estimate a more accurate data distribution, resulting in improved generation quality.
The paper conducted a series of experiments encompassing large-scale datasets, limited-size datasets, and fine-tuning scenarios, to demonstrate the effectiveness of training images in patch units. This approach maintains performance while significantly improving training efficiency.
Weaknesses: The key idea of this paper is to modify the input data format while maintaining the training process of the diffusion model. This approach aligns with similar strategies proposed in previous works like COCO-GAN. Considering the large overlap in ideas, the authors should present various case studies, such as showcasing the application of this technique to diffusion models, in order to compensate for the limited novelty of the paper.
Firstly, there is a lack of analysis concerning spatial conditions. The authors convert the traditional three-channel format (R, G, B) to a five-channel format (R, G, B, i, j) incorporating location information. Meanwhile, other methods such as positional encoding in modern Transformer structures or Fourier feature methods in Alias-Free GAN have been proposed for spatial conditioning. Multiple experiments in various literature have demonstrated that positional encoding is more effective than raw methods like (i, j). Conducting an ablation study on different spatial conditioning methods and providing an analysis of the most suitable conditioning approach in a diffusion setting would strengthen the paper's credibility.
Secondly, there is insufficient analysis regarding the ability of patch diffusion to generate structural diversity. The experimental results in Table 1 and Table 2 show a notable improvement in the quality of patch diffusion for datasets with weak common structures (e.g., Bedroom, Church) compared to those with strong common structures (e.g., FFHQ, CelebA). Including an analysis of the factors contributing to this quality improvement, such as visualizing attention layers for patch images, would help readers' understanding of the method.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. Patch-wise generation:
Is it feasible to generate the entire image by combining generated parts instead of generating it all at once?
2. Patch diffusion for image extrapolation:
What would happen if channels for (i, j) were provided with a range of (-1.2, 1.2)? Can patch diffusion extrapolate the image using this approach?
3. Figure quality:
To enhance the readability and visual appeal of Figure 1, I recommend refining its quality. Currently, the overall figure appears hastily created, resembling a PowerPoint slide. I kindly request aligning each object for the camera-ready submission to improve its structure. The objects currently occupy space without significant value. Also, the range of i, j is -1~1, but the crop has a different scale, such as 16x16. Please provide an example of the actual value when a 16x16 crop is performed. Furthermore, the font size in the figure is very small, making it difficult to read. Considering the available white space, increasing the font size throughout would greatly improve legibility.
4. Typo:
Line 86: "e.g.,." should be corrected to "e.g."
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: The authors adequately addressed the limitations and potential negative societal impact.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank Reviewer Vqdw for providing the positive feedback and constructive suggestions. We address your questions and provide more clarifications below.
> Firstly, there is a lack of analysis concerning spatial conditions.
Thanks for pointing this out. We agree that a proper positional embedding could potentially improve the current Patch Diffusion training. We have tried Fourier positional encoding (used in Neural Radiance Fields [1]) as an alternative to the raw coordinate inputs, but we did not see an obvious improvement in preliminary experiments. We will explore this further (as mentioned in the future work).
[1] Mildenhall, Ben, et al. "Nerf: Representing scenes as neural radiance fields for view synthesis." Communications of the ACM 65.1 (2021): 99-106
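For reference, a minimal NumPy sketch of the NeRF-style Fourier encoding mentioned above, mapping each coordinate p in [-1, 1] to [sin(2^k pi p), cos(2^k pi p)] features (this is our illustrative reconstruction, not the exact variant we tested):

```python
import numpy as np

def fourier_encode(coords, num_freqs=4):
    """NeRF-style positional encoding applied per coordinate.
    coords: array of shape (..., 2) holding (i, j) values in [-1, 1].
    Returns features of shape (..., 2 * 2 * num_freqs)."""
    coords = np.asarray(coords, dtype=float)
    freqs = (2.0 ** np.arange(num_freqs)) * np.pi        # (num_freqs,)
    angles = coords[..., None] * freqs                   # (..., 2, num_freqs)
    enc = np.concatenate([np.sin(angles), np.cos(angles)], axis=-1)
    return enc.reshape(*coords.shape[:-1], -1)
```

With `num_freqs=4`, each (i, j) pair expands from 2 raw channels to 16 encoded channels, which is the kind of higher-frequency conditioning signal such an embedding is meant to provide.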
> Secondly, there is insufficient analysis regarding the ability of patch diffusion to generate structural diversity.
We agree with the observation that Patch Diffusion training helps more for datasets with weak common structures, such as Bedroom, Church, and ImageNet (with classifier-free guidance, we now reach FID **2.74**). We hypothesize that training on local patches helps to capture more local details in the image and thus improves the overall generation quality.
> Question 1
It is feasible to do generation by parts. The results will depend on the generation scheme. For example, for the CelebA dataset, if you generate the image with four non-overlapping parts (left-top, right-top, left-bottom, right-bottom), then the model will generate a face with each part independently generated. If the entire coordinate manifold is the input, then the entire image is generated and is consistent.
> Question 2
Thanks for pointing this out. We have provided the extrapolation results in the pdf of **Response to All**.
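To make the coordinate conditioning concrete: the extra (i, j) channels normalize pixel positions over the full image to [-1, 1], and extrapolation amounts to querying positions outside that range (e.g. (-1.2, 1.2)). A hypothetical sketch (the function name and exact normalization are assumptions, not the paper's code):

```python
import numpy as np

def patch_coord_channels(top, left, size, image_size):
    """Build the (i, j) coordinate channels concatenated to an RGB patch,
    with the full image extent mapped to [-1, 1]. Passing top/left outside
    [0, image_size - size] yields coordinates beyond [-1, 1], i.e. extrapolation."""
    ys = np.arange(top, top + size)
    xs = np.arange(left, left + size)
    i = 2.0 * ys / (image_size - 1) - 1.0    # row coordinate in [-1, 1]
    j = 2.0 * xs / (image_size - 1) - 1.0    # column coordinate in [-1, 1]
    ii, jj = np.meshgrid(i, j, indexing="ij")
    return np.stack([ii, jj], axis=0)        # shape (2, size, size)
```

For example, sampling a patch with `top = left = -6` on a 64x64 image queries coordinates slightly below -1, which is the mechanism behind generating content beyond the original canvas.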
> Question 3
We thank you for pointing out suggestions in refining Figure 1. We will follow your suggestions and modify it in our next revision.
---
Rebuttal Comment 1.1:
Comment: Thank you for the authors' response.
For weakness 1, providing sFID may give some answer. Have you tried measuring sFID? Other diffusion model literature usually reports FID, sFID, precision, and recall. Could you provide all the other metric scores?
For question 1, stitching multiple images into one can cause seams in the image. Does Patch Diffusion also suffer from seams?
---
Reply to Comment 1.1.1:
Comment: We thank Reviewer Vqdw for reviewing the response and providing feedback. We answer your two remaining questions below.
- In our initial experiment, we assessed the FID on CelebA-64x64 both with and without the NeRF positional embedding. The FID values were 1.81 for Patch Diffusion with NeRF and 1.77 for Patch Diffusion using current coordinates. A comprehensive discussion on NeRF and other metrics will be presented in our subsequent revision.
- As previously noted, the generative outcome is contingent upon the input coordinates. If four non-overlapping coordinates are given, the model will independently generate four non-overlapping patches accordingly. A naive combination of these patches will result in seam discrepancies. This outcome is anticipated, given that the input comprises only non-overlapping coordinate sets, preventing the model from achieving coherence in unprovided segments. However, when provided with a complete coordinate manifold, the model can render a unified and coherent image. Patch diffusion can potentially be combined with dedicated methods developed for stitching different region generations, such as overlapping certain regions of different generations [1].
[1] Bar-Tal, Omer, et al. "Multidiffusion: Fusing diffusion paths for controlled image generation." (ICML 2023). | Summary: The paper introduces a path-wise diffusion algorithm for faster training. The authors propose patch coordinate conditioned diffusion models and present a patch-size conditioning scheduling technique for efficient training. The method has similar motivation with patch based GAN such as COCOGAN, but it is applied to diffusion model.
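As a toy illustration of the overlap-based stitching idea referenced above (a simplified linear cross-fade between two horizontally adjacent generations; this is our sketch, not MultiDiffusion's actual algorithm, which fuses diffusion paths during sampling):

```python
import numpy as np

def stitch_overlapping(left, right, overlap):
    """left, right: (H, W) patches whose content shares `overlap` columns;
    linearly cross-fade the shared columns instead of hard-cutting a seam."""
    h, w = left.shape
    out = np.zeros((h, 2 * w - overlap))
    alpha = np.linspace(1.0, 0.0, overlap)               # fade-out weight for left
    out[:, : w - overlap] = left[:, : w - overlap]
    out[:, w - overlap : w] = alpha * left[:, w - overlap:] + (1 - alpha) * right[:, :overlap]
    out[:, w:] = right[:, overlap:]
    return out
```

The cross-fade removes the hard discontinuity at the boundary; methods like MultiDiffusion go further by enforcing consistency inside the sampling loop rather than post hoc.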
Strengths: The method presented in the paper is simple yet effective, successfully reducing the training time by half. The motivation behind the approach aligns with that of COCOGAN, and it can be seen as a reasonable extension for the diffusion model to reduce computational requirements.
Weaknesses: To learn global structure, the algorithm still needs a portion of full-resolution diffusion, which introduces a bottleneck in terms of time and memory costs. Additionally, the patch-size scheduling approach appears to be manually designed, potentially limiting its adaptability and automation in optimizing the training process. More controlled experiments on various positional encodings or different hyperparameters would provide better intuition about the work.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: 1. Is it possible to extrapolate an image as shown in COCO-GAN?
2. What are the FID scores of the baseline model with the same training time without patch training? (e.g., 24h FIDs of EDM-DDPM++ for CelebA)
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 3 good
Limitations: The author provides some limitations on conclusion section.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank Reviewer ExTh for providing positive feedback. We address your questions and provide more clarifications below.
> Weakness
The patch-size scheduling could be flexibly set to other values; different schedules will induce different levels of gain in training efficiency. The setting shown in the paper is not cherry-picked and is simply what we used. We agree that incorporating more advanced positional embeddings is worth further investigation. We have tried Fourier positional encoding (used in Neural Radiance Fields [1]) as the simplest replacement for the raw coordinate inputs, but we did not see an obvious improvement in preliminary experiments. We will explore this further (as mentioned in the future work).
[1] Mildenhall, Ben, et al. "Nerf: Representing scenes as neural radiance fields for view synthesis." Communications of the ACM 65.1 (2021): 99-106
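For readers unfamiliar with the NeRF-style Fourier positional encoding mentioned above, here is a minimal sketch of what "replacing the raw coordinate inputs" looks like. This is an illustrative implementation of the standard encoding from [1], not the authors' actual code; the band count and normalization range are my own choices.

```python
import numpy as np

def fourier_encode(coords, num_bands=6):
    """NeRF-style positional encoding: map each normalized coordinate x to
    [sin(2^0*pi*x), ..., sin(2^(L-1)*pi*x), cos(2^0*pi*x), ..., cos(2^(L-1)*pi*x)]."""
    coords = np.asarray(coords, dtype=np.float64)
    freqs = (2.0 ** np.arange(num_bands)) * np.pi   # (L,) geometrically spaced frequencies
    angles = coords[..., None] * freqs              # (..., L)
    return np.concatenate([np.sin(angles), np.cos(angles)], axis=-1)

# Encode the (i, j) pixel coordinates of a 16x16 patch, normalized to [-1, 1].
ij = np.stack(np.meshgrid(np.linspace(-1, 1, 16), np.linspace(-1, 1, 16),
                          indexing="ij"), axis=-1)  # (16, 16, 2)
features = fourier_encode(ij)                       # (16, 16, 2, 12)
```

The encoded features would then be fed to the network in place of the raw `(i, j)` values.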
> Is it possible to extrapolate an image as shown in COCOGAN?
Thanks for pointing this out. We have provided the extrapolation results in the pdf of **Response to All**.
> What is FID scores of baseline model with same train-time without patch training? (e.g. 24h FIDs of EMM-DDPM++ for CelebA)
Our training logs show that when the EDM baseline method is trained for ~24h, it produces an FID score of 1.85 on CelebA and 3.36 on FFHQ, respectively.
---
Rebuttal Comment 1.1:
Comment: Thank you for providing the author response. It seems the model can extrapolate well. After reading the response, I keep my original rating.
---
Reply to Comment 1.1.1:
Comment: We thank you for your consideration and affirmation after reviewing our response; we genuinely appreciate it. | Summary: The authors propose a new formulation of training diffusion models by sampling different-sized patches from the training data. The models trained with this formulation have comparable FID scores to models trained on full images on many datasets.
Strengths: The authors proposed a way to train faster diffusion models, which can cut down training time in half with almost similar performance.
Well-written and to the point paper.
Weaknesses: Training with image patches under the same guidance might not work when the training data is more heterogeneous, like LAION. In datasets where the subject is not centered in the image, occupies only a small portion of the image, or where multiple objects exist in the scene, I wonder whether the model can still perform well.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Do you have any results on models trained on LAION or MS COCO?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank reviewer qnyP for providing positive feedback and valuable suggestions. Currently, the pretraining scope of our experiments is limited to class-conditional experiments due to limited computational resources. Thanks for pointing out this interesting and promising direction. We will consider training on text-guided image generation datasets such as LAION or MS-COCO in the future.
---
Rebuttal Comment 1.1:
Title: Thank you
Comment: Thank you for the response. I stick to my current score.
---
Reply to Comment 1.1.1:
Comment: We appreciate the time you've taken to review our work and provide feedback. We are committed to continuous improvement and value your insights. | Rebuttal 1:
Rebuttal: # Response to All
We'd like to thank all five reviewers for their insightful comments and suggestions. We hereby provide the image extrapolation results that Reviewer ExTh, Vqdw and NELo are interested in, and the state-of-the-art ImageNet-1K(256x256) FID **2.74** for Patch Latent Diffusion with the use of classifier-free guidance. We will incorporate these new results and improvements into the camera ready version of the paper.
Pdf: /pdf/6ab81224e783029ef34db3c0fe948209de151315.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: The paper proposes a novel training framework for diffusion models that significantly reduces the training time while improving data efficiency. The proposed method is the first to suggest patch-wise diffusion training, which can be deployed in any UNet-based diffusion model. Experimental results show that patch-wise diffusion training can halve the training time while maintaining comparable or better image quality than the baseline models. On small-scale datasets, the proposed method outperformed other baselines, validating the data efficiency of patch-wise training.
Strengths: - The paper is well organized and easy to follow. The motivation of the paper is very clear, which is to shorten the training time of diffusion models while maintaining image generation quality.
- Proper ablation studies are delivered to fully validate the roles of different components of the model and the effect of different parameters.
- The 2x speedup in training time is non-negligible, considering the long training time of conventional diffusion models.
- The data efficiency brought by the patch-wise training framework is considered a significant discovery.
- The proposed method can be applied to any UNet-based diffusion model in a plug-and-play manner.
Weaknesses: - Despite the empirical evidence, a theoretical proof of convergence of patch-wise score matching is missing.
- Experiments on high-resolution image synthesis beyond 256x256 resolution are missing. According to Tables 1 and 2, compared to the baseline method, the proposed patch diffusion showed a better performance boost on the larger-scale datasets (LSUN-Bedroom & Church) than on the smaller-scale datasets (CelebA and FFHQ). Thus, it might imply that patch diffusion can be more beneficial in high-resolution image synthesis scenarios. Therefore, validation on high-resolution image datasets beyond 256x256 resolution could be interesting.
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: - While training, when entering the denoiser (UNet), are the small patches resized to the original image size? If not, will there be a difference between resizing and not?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 3 good
Contribution: 3 good
Limitations: Limitations are addressed by the authors.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank Reviewer nnk4 for your positive and valuable feedback. We appreciate the time and effort you've taken to review our work. We have carefully considered your comments and suggestions, and we would like to respond to each of them.
> Albeit the empirical evidence, the theoretical proof of convergence of patch-wise score matching is missing.
Thanks for pointing this out. We mentioned in Section 5 that providing theoretical proof of convergence for patch-wise score matching is a potential future work. Currently, we provide a theoretical interpretation for Patch Diffusion in Appendix A.
> Experiments on high resolution image synthesis, beyond 256x256 resolution is missing.
Thanks for the suggestion! Training on high-resolution images is still too expensive for us due to our limited computational resources. We will consider training on 512x512 images in the future. We also provide image extrapolation results in the response to all, which show that our model can generate larger images, such as 384x384 and 512x512 images, while only being trained on 256x256 images.
> While training, when entering the denoiser (UNet), has the small patches resized into the original image size? If not, will there be a difference between using resizing and not?
No, the small patches are not resized into the original image sizes when entering the UNet. Using small patches for training could significantly improve the training efficiency. | null | null | null | null | null | null |
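To make the "patches are not resized" point concrete, here is a hypothetical sketch of patch-wise training input preparation: crop a random patch at its native resolution and attach its normalized pixel-coordinate channels so the denoiser knows the patch's location. This is my own illustration of the general idea, not the authors' implementation; the conditioning details in the actual paper may differ.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_patch(image, patch_size):
    """Crop a random patch (no resizing) and append two channels holding the
    patch's normalized (row, col) coordinates within the full image."""
    c, h, w = image.shape
    top = rng.integers(0, h - patch_size + 1)
    left = rng.integers(0, w - patch_size + 1)
    patch = image[:, top:top + patch_size, left:left + patch_size]
    rows = np.linspace(-1, 1, h)[top:top + patch_size]
    cols = np.linspace(-1, 1, w)[left:left + patch_size]
    rr, cc = np.meshgrid(rows, cols, indexing="ij")
    return np.concatenate([patch, rr[None], cc[None]], axis=0)

image = rng.standard_normal((3, 64, 64))
x = sample_patch(image, 16)   # (5, 16, 16): 3 colour channels + 2 coordinate channels
```

Because the patch keeps its native resolution, a 16x16 patch costs far less per training step than the full 64x64 image.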
Saddle-to-Saddle Dynamics in Diagonal Linear Networks | Accept (spotlight) | Summary: This paper characterizes the trajectory of gradient flow on 2-layer diagonal linear networks for linear regression tasks. Specifically, the paper considers the model parameterized as $x \mapsto \langle u \odot w, x \rangle$ in the linear regression setting. By interpreting the gradient flow on the nonconvex loss as a mirror flow, the authors show, in the limit of vanishing initialization, that the gradient flow dynamics jump between various saddles before converging to the minimum $\ell_1$ norm solution. The paper characterizes the exact times of these jumps (after appropriate rescaling based on the initialization size $\alpha$ in the $\alpha \rightarrow 0$ limit), as well as the location of these saddles; these times and locations can be computed via algorithm 1. As a consequence, the paper shows that each saddle is a solution to a constrained minimization problem where some subset of coordinates are fixed 0. Under a RIP assumption on the data, the paper shows that new coordinates are added sequentially at each saddle, thus demonstrating incremental learning.
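The dynamics described in the summary can be illustrated with a tiny simulation (my own toy instance, not from the paper): plain gradient descent on $L(u,v) = \frac{1}{2}\Vert X(u \odot v) - y \Vert^2$ with a small initialization scale $\alpha$ drives the iterates close to the minimum $\ell_1$-norm interpolator.

```python
import numpy as np

# Toy instance: one sample, two features, X = [[1, 2]], y = [2].
# Interpolators satisfy b1 + 2*b2 = 2; the minimum-l1-norm one is (0, 1).
X = np.array([[1.0, 2.0]])
y = np.array([2.0])

alpha = 1e-2                      # small initialization scale
u = np.full(2, alpha)
v = np.full(2, alpha)
lr = 5e-3

for _ in range(20000):
    beta = u * v                  # predictor is x -> <u * v, x>
    r = X @ beta - y              # residual
    grad_beta = X.T @ r
    u, v = u - lr * grad_beta * v, v - lr * grad_beta * u

beta = u * v                      # close to (0, 1) for small alpha
```

Coordinate 2 is picked up first (its column of $X$ is larger), and coordinate 1 stays near zero, matching the sparse $\ell_1$-minimal limit; shrinking `alpha` further sharpens the plateaus into the piecewise-constant saddle-to-saddle trajectory.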
Strengths: - The paper appears to be technically sound, and is well-written and easy to understand.
- The quadratically parameterized regression problem has been well studied in prior works (Woodworth et al. 2020, Azulay et al. 2021, and others), which show that the gradient flow can be characterized by mirror descent and converges to the minimum $\ell_1$ solution in the $\alpha \rightarrow 0$ limit. However, the current paper exactly characterizes the limit of the gradient flow trajectory under minimal data assumptions, which is a novel analysis. The main technical novelty here is proving that as $\alpha \rightarrow 0$ the trajectory converges to a piecewise constant process.
- While many prior works on implicit regularization only focus on properties of the solution at convergence, the current paper can describe the saddle points along the trajectory. I find it to be a significant contribution that the successive saddles which can be visited by gradient flow can be characterized (by Algorithm 1) explicitly. In particular, the observation that coordinates can be deactivated has not appeared in prior work and is quite interesting.
- Finally, I find this paper to be of moderate significance, as the quadratically parameterized linear regression setting is a common toy problem to understand the implicit regularization effect of gradient descent more generally.
Weaknesses: There are a couple (minor) weaknesses of the current theory.
- The analysis is limited to the quadratically parameterized regression setting, which, despite possessing rich implicit regularization behavior, is still far from more practical settings in which implicit regularization occurs.
- One weakness of the current paper is that the jump times and saddle locations are only implicitly defined via Algorithm 1. The paper does show that under an RIP assumption that the jump times are $1/\beta^*_s$ and the saddles correspond to incrementally learning more coordinates. However, it is difficult to interpret the intermediate iterates of Algorithm 1 more generally, which reduces the impact of this paper.
- Another weakness is that the entirety of the analysis is done in the $\alpha \rightarrow 0$ limit, rather than at some small (but finite) initialization $\alpha$.
Minor typos:
- line 133 “diferred” → “deferred”
- line 318 “independant” → independent”
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: - In general, what can one say about successive saddles? It would be interesting to understand more fine grained properties about the sequence of saddles, such as how many coordinates can change at each step and under what conditions coordinates can get deactivated. I find it unlikely that Algorithm 1 would loop through all possible subsets of activated coordinates, and instead find it more likely that coordinates would be deactivated less frequently. I thus think the paper would benefit from additional investigation (either theoretically or empirically) into how the set of activated coordinates changes between saddles.
- What can be said in the case when $\alpha$ is small but isn’t taken to 0? Can one say something quantitatively about whether saddle-to-saddle dynamics occur, or the rate at which the trajectory limits to the piecewise constant trajectory $\tilde \beta^\circ$?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Limitations/Broader Impact are adequately addressed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you very much for the feedback.
We answer your questions below and address your comment on the restrictive setting in the *official comment* section, since it was raised by nearly all the reviewers.
**Finite initialisation**: unfortunately, as we acknowledge on line 267, our results do not provide a rate of convergence in $\alpha$ and therefore cannot explain the observed stepwise trajectories for a non-zero initialization. We believe that a non-asymptotic result is currently challenging, but our work is nonetheless a significant contribution, and the tools and techniques we provide could eventually lead to such a result.
**Additional results concerning Algorithm 1**: we would first highlight that the main goal of our work is to prove that the limiting dynamics of GF with vanishing initialisation is a saddle-to-saddle dynamics which is fully described by Algorithm 1. From there, as rightfully mentioned by the reviewer, many very interesting questions can be raised concerning the behaviour of this algorithm. We already provide a few results:
- upper bound on the number of iterations
- proof that the coordinates of the iterates $\beta_k$ have at most $n$ non-zero coordinates (see prop 6 in the appendix)
- analysis of the full behaviour under a RIP assumption
Moreover, we believe that further results are out of the scope of our paper. Indeed, to draw a comparison, we are not aware of results for the Homotopy method which go beyond results under RIP assumptions, apart from [a], which provides a worst-case lower bound on the number of iterations. Note that this result is from 2012, more than 10 years after the algorithm was initially introduced in [b].
[a] Mairal and Yu, Complexity Analysis of the Lasso Regularization Path, ICML 2012
[b] Osborne et al. A new approach to variable selection in least squares problems, 2000.
---
Rebuttal Comment 1.1:
Title: Response to Authors
Comment: Thank you to the authors for responding to my questions. I agree with the authors that the current results are indeed novel and relevant, and that the additional results I asked about are beyond the scope of the current paper (though still interesting!). I have increased my score from a 6 to a 7. | Summary: This work concerns the behavior of gradient flow of 2-layer diagonal linear neural networks when the initialization scale goes to zero. The authors show that this limiting behavior is governed by a piecewise constant trajectory consisting of jumps from saddle to saddle.
Strengths: The incremental learning and saddle-to-saddle phenomenon is interesting. Although the model considered in this paper is equivalent to linear models, the dynamics that the authors uncover is nonetheless nontrivial and of broader interest. The writing quality is excellent and the technical material is presented in an intuitive to understand way.
Weaknesses: Is the final point $\beta_p$ in Theorem 2 the minimum $\ell_1$ solution to $L$? Theorem 2 does not explicitly state it.
Much of the later analysis is regarding the $\beta$'s. But the model is stated in terms of the $u$'s and $v$'s. Can the authors comment on the behavior of the $u$'s and $v$'s? (Or at least point me to where this was discussed, in case I missed it).
line 40 "diagonal linear networks which are simplified neural networks that have received significant attention lately...". Could the authors provide a slightly expanded discussion of what these prior work did? Since this paper is specifically about diagonal linear networks, this part of the related work I believe is especially important.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: What is the significance of the connection to the Homotopy algorithm? I'm not familiar with it and would appreciate if the authors can explain a bit more why this connection matters.
If the authors can address this and the weaknesses above, I'll be happy to raise the score to a 7.
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 2 fair
Limitations: I do not believe the authors addressed the limitations, at least not in an explicit way. I think the paper would be greatly improved if it could touch upon the comment brought up on line 78 that the activation function is the identity. The authors make it very clear that the work is about linear networks, as many other prior works have done. I feel that if the authors could provide some form of experimental results or an explanation of whether this phenomenon appears (or does not appear) when $\sigma$ is nonlinear, it would be greatly beneficial. That said, I think the paper's technical contribution outweighs this.
If the authors can address this and all of the above, I'll be happy to raise the score to an 8.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you very much for the extensive and valuable feedback. We answer your remarks and questions below.
**Final point $\beta_p$**: the final point $\beta_p$ is indeed the minimum $\ell_1$ norm solution $\beta^*_{\ell_1}$, we tried to make this clear by pointing out that $\beta_p = \beta^*_{\ell_1}$ in Theorem 2 line 253. We propose to emphasize this point if the reviewer feels it is necessary.
**Neuron point of view**: we indeed did not put much emphasis on the neurons $(u,v)$. However as explained lines 263-265, there exists a bijection (given in Lemma 1) between the neurons and the vectors $\beta_t$. Therefore we immediately get that the neurons $(u^\alpha_t, v^\alpha_t)$ converge towards $(\sqrt{\vert \tilde{\beta}_t^\circ \vert}, \text{sign}(\tilde{\beta}_t^\circ) \sqrt{\vert \tilde{\beta}^\circ_t \vert})$. We will make this point clearer in the revised version.
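The bijection stated above is easy to check numerically. The sketch below (my own illustration of the rebuttal's formula, not code from the paper) maps a predictor $\beta$ to the neuron pair $(u, v) = (\sqrt{\vert\beta\vert},\ \mathrm{sign}(\beta)\sqrt{\vert\beta\vert})$ and verifies that their element-wise product recovers $\beta$.

```python
import numpy as np

def neurons_from_beta(beta):
    """Map a predictor beta to diagonal-network weights (u, v) with
    u = sqrt(|beta|) and v = sign(beta) * sqrt(|beta|), so u * v = beta."""
    root = np.sqrt(np.abs(beta))
    return root, np.sign(beta) * root

beta = np.array([1.5, -0.25, 0.0, 4.0])
u, v = neurons_from_beta(beta)
assert np.allclose(u * v, beta)   # the element-wise product recovers beta
```

In particular, the convergence of $\beta_t^\alpha$ immediately transfers to the neurons through this map, since it is continuous in $\beta$.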
**Previous works on DLNs**: for lack of space, we unfortunately did not provide an extensive discussion of previous works on DLNs. This omission is mostly because the vast majority of these works investigate the properties of the solution recovered by gradient flow / GD / SGD, without looking at the trajectory. The two most relevant references are [4] and [38], which are discussed at the beginning of Section 2.2:
- [4] shows that the iterates $\beta_t^\alpha$ follow a mirror flow with potential $\phi_\alpha$
- [38] shows that the recovered solution $\beta_\infty^\alpha$ minimises the potential $\phi_\alpha$
Concerning other works on diagonal linear networks, we have:
- [33] investigates the role of SGD’s noise on the recovered solution
- [18] investigates the effect of label noise on the recovered solution
- [13] and [c] investigate the effect of the stepsize on (S)GD's recovered solution
- [37] investigates the statistical properties of the solution recovered by GD under a RIP assumption
We will add the discussion concerning these results in the related work.
**Link with the Homotopy algorithm**: the connection between Algorithm 1 and the Homotopy algorithm **is not important in itself but should be seen as a help to understand Alg 1**.
We mention this connection for two reasons: *(1)* the main one is to make Algorithm 1 feel less opaque and "out of the blue" by linking it to an old and well-established algorithm; *(2)* the construction of Algorithm 1 as given in Section 3.1 follows very similar lines to that of the Homotopy algorithm.
In this way, making a link with the Homotopy algorithm is a way (for readers who are acquainted with it) of providing a familiar algorithm which makes our results more natural. **However, we hope that our intuitive construction of Alg 1 in Section 3 is clear enough for the algorithm to be understandable to people who are not familiar with the Homotopy algorithm.**
**Non-linear activation**: We would like to emphasize that **saddle-to-saddle dynamics also occur even when there is a non-linear activation**. To support this claim, we refer to [22], which "give evidence for the hypothesis that, as iterations progress, SGD learns functions of increasing complexity" for non-linear deep networks and that there is a "separation in phases for learning". Furthermore, in [8], a 2-layer ReLU network is considered and you can observe a clear saddle-to-saddle dynamics in Figure 3 (for orthogonal inputs) and also in Figure 4 (f) (general inputs). We can also mention more recent works which consider non-linear networks and where clear saddle-to-saddle dynamics are observed: see Figure 2 in [a] and Figure 5 in [b]. We hope that these references will convince the reviewer that a non-linear activation function **does not** prevent the occurrence of saddle-to-saddle phenomena. We will make this clearer in the revised version.
[a] Simon et al., On the Stepwise Nature of Self-Supervised Learning, arXiv 2023
[b] Oswald et al., Transformers Learn In-Context by Gradient Descent, arXiv 2023
[c] Nacson et al., Implicit Bias of the Step Size in Linear Diagonal Neural Networks, ICML 2022
---
Rebuttal Comment 1.1:
Comment: I thank the authors for a thorough reply. I have raised my score to an 8. | Summary: This paper analyzes a two-layer diagonal linear network, which is a linear regression model where the linear weight is parameterized as the point-wise product of two weight vectors. It is shown that, with vanishing initialization, gradient flow will jump between saddle points of the training loss and eventually reach the $\ell_1$-minimum-norm solution. This paper further provides an algorithm to calculate all the saddle points.
Strengths: It is very important to understand the training dynamics of neural networks, and this paper analyzes a novel and interesting saddle-to-saddle phenomenon. Moreover, Algorithm 1 in this paper can compute all the jump times and saddle points, giving a complete characterization of the gradient flow trajectory. The writing is also clean, with illustrative examples and graphs.
Weaknesses: The main weakness is that the two-layer diagonal linear network model (specifically, the point-wise product of two weight vectors) is too simple. If there is some evidence that some of form of the saddle-to-saddle phenomenon also happens in practical networks, this paper will be more convincing.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: N/A
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you very much for your feedback.
We answer your questions below and address your comment on the restrictive setting in the *official comment* section, since it was raised by nearly all the reviewers.
**Evidence for practical networks**: we would like to re-emphasize that saddle-to-saddle dynamics **have been observed in practice in various cases** and that this preponderance is precisely the motivation behind our work (see lines 25 to 32 in our introduction). For instance, in [22], the authors empirically *give evidence for the hypothesis that, as iterations progress, SGD learns functions of increasing complexity* when training deep neural networks. For further references highlighting the incremental nature of learning, you can see characteristic stepwise learning curves in Figures 1 and 2 from [a] or Figure 2 and 3 from [b]. Also note that these observations in practical networks have led to a large amount of works trying to theoretically prove such dynamics in various settings: matrix factorisation, linear networks etc. (see lines 33 to 41 in our introduction). However, our work is the first to theoretically provide a complete picture describing this phenomenon without imposing restrictive assumptions on the design matrix.
[a] Simon et al., On the Stepwise Nature of Self-Supervised Learning, arXiv 2023
[b] Oswald et al, Transformers Learn In-Context by Gradient Descent, arXiv 2023
---
Rebuttal Comment 1.1:
Comment: Thanks for the response! I will keep my score. | Summary: In this paper, the authors studied the training dynamics of gradient flow that minimizes mean-square loss with 2-layer diagonal linear networks and data in general position. Under the small initialization (initialization scale goes to 0), the authors showed that the limiting dynamics follows a saddle-to-saddle dynamics (jump from one saddle point to another saddle point) until reach the min-$\ell_1$-norm interpolator. They gave an algorithm that can compute the visited saddle points. The results generalized the previous works on incremental learning. Experiments are also provided to verify the results.
Strengths: 1. The paper is clearly written and easy-to-follow. The proof sketch and example are given to help the readers to understand the proof easier.
2. Understanding the training dynamics of neural networks/nonlinear models is an interesting and important problem.
3. This paper gives a precise characterization of the training dynamics that jumps between saddle points. This recovers and goes beyond some of the previous works on the incremental learning for linear diagonal networks.
4. The technique of reparametrized time to “accelerate” time seems to be interesting and might be of independent interest.
Weaknesses: 1. The current paper focuses on the linear diagonal linear networks. It would be interesting to see if such analysis could be generalized to other more complicated problems.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. In Theorem 2, it seems that the final solution is min-$\ell_1$-norm interpolator, so I think that means we are implicitly assuming the input dimension $d$ is at least the number of samples $n$?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The limitation is discussed in the paper. This is a theoretical work and therefore does not seem to have negative societal impact.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you very much for the valuable feedback.
We address your comment on the restrictive setting in the *official comment* section, since it was raised by nearly all the reviewers.
**Dimension and number of samples**: your comment on the input dimension $d$ having to be larger than the number of samples $n$ is due to our imprecise definition of $\beta^*_{\ell_1}$ in the paper; we make it clearer here and will make it clearer in the revised version. The definition of $\beta^*_{\ell_1}$ which we give on line 116 as $\beta^*_{\ell_1} = \arg \min_{X \beta = y} \Vert \beta \Vert_1$ indeed **only makes sense when $d > n$**. However, when $d \leq n$, all our results still hold by simply letting $\beta^*_{\ell_1}$ be the unique minimiser of the loss (which trivially is still the minimum $\ell_1$-norm solution, as it is then the only solution!). The general and correct (but slightly heavier) way of defining $\beta^*_{\ell_1}$ is:
$\beta^*_{\ell_1} \coloneqq \arg \min_{ \beta \in \arg \min L} \Vert \beta \Vert_1$. This definition then holds for any $(n, d)$ and all our results still hold. Thank you for pointing out this imprecision.
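As a concrete aside (my own illustration, not part of the rebuttal), in the overparameterized case $d > n$ the target $\beta^*_{\ell_1}$ can be computed directly via the standard linear-programming reformulation $\beta = p - q$ with $p, q \geq 0$, using an off-the-shelf LP solver:

```python
import numpy as np
from scipy.optimize import linprog

def min_l1_interpolator(X, y):
    """Solve min ||beta||_1 subject to X beta = y via the LP split
    beta = p - q with p, q >= 0, minimizing sum(p) + sum(q)."""
    n, d = X.shape
    c = np.ones(2 * d)            # objective: sum(p) + sum(q) = ||beta||_1
    A_eq = np.hstack([X, -X])     # equality constraint: X p - X q = y
    res = linprog(c, A_eq=A_eq, b_eq=y, bounds=(0, None))
    p, q = res.x[:d], res.x[d:]
    return p - q

X = np.array([[1.0, 2.0]])        # one sample, two features
y = np.array([2.0])
beta = min_l1_interpolator(X, y)  # minimum-l1 interpolator here is (0, 1)
```

This gives a cheap reference solution against which the limit of the gradient-flow trajectory can be checked numerically.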
---
Rebuttal Comment 1.1:
Comment: Thanks for the response to address my question. I will keep my score. | Rebuttal 1:
Rebuttal: We thank all the reviewers for the time they spent reviewing our paper and for the valuable feedback. An overall comment made by all reviewers (except reviewer P2pV) is that the considered setting (2-layer diagonal linear network) is too restrictive. We naturally agree that our setting is very far from practical networks. However, we want to emphasize that despite its apparent simplicity, the loss function is non-convex and very rich behaviors already occur, as illustrated in the paper.
This saddle-to-saddle behavior (also called incremental learning or feature learning) has been observed in a variety of more complicated settings as motivated in the introduction. We take advantage of this rebuttal to add two recent works which highlight the prominence of such dynamics in current deep learning settings: see Figure 2 in [a] where several network architectures are considered as well as Figures 2, 3, 5 in [b] where various transformers are studied.
As such, we consider our setting as being an ideal proxy model for gaining a deeper understanding of saddle-to-saddle dynamics. We would also like to highlight that despite the apparent simplicity of the model, the analysis is already rather involved and a non-trivial algorithm emerges. Furthermore, we believe that the tools we leverage in our work could be very useful for future works studying similar dynamics but in different settings. Finally, we want to point out that the solution recovered by gradient flow with vanishing initialization is still not fully understood for more complex frameworks such as matrix multiplication or 2-layer ReLU networks. Hence we believe that studying the full trajectory and explaining the observed saddle-to-saddle dynamics in such general settings is currently out of reach.
In view of all this, we hope that the reviewers will acknowledge that though our framework may seem deceptively simple, the analysis is already quite intricate and our work contributes to the understanding of an important topic as well as provides useful tools for future research.
[a] Simon et al., On the Stepwise Nature of Self-Supervised Learning, arXiv 2023
[b] Oswald et al, Transformers Learn In-Context by Gradient Descent, arXiv 2023 | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: This paper studies the saddle-to-saddle dynamics in Diagonal Linear Networks.
The authors present solid theoretical understanding.
They show that for a 2-layer diagonal linear network, gradient flow starting from vanishing initialization jumps from one saddle of the training loss to another until reaching the minimum $\ell_1$-norm solution.
Strengths: The paper is clear and well-written.
Understanding the training dynamics of gradient descent over neural networks is a significant theoretical issue.
Particularly, the phenomenon of saddle-to-saddle during neural network training remains mysterious, and this paper makes a valuable contribution by offering a solid theoretical analysis for 2-layer diagonal linear networks.
The proof presented in the paper incorporates innovative techniques, including mirror flow and time-reparametrization.
Weaknesses: The article's significance may be constrained by the focus on 2-layer diagonal linear networks, which have limited representation abilities and are no better than linear models.
However, this limitation is not a major concern since the problem itself is non-convex, even in this simplified scenario.
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: 1. I would like to know how much the vanishing initialization would affect the saddle-to-saddle phenomenon. Will saddle-to-saddle dynamics also occur if a practical initialization is used?
2. As shown in the theory and Figure 2, the recovery of coordinates is sequential, which seems to be similar to the Coordinate Descent algorithm. In this setting of 2-layer diagonal linear networks, are GD and CD inherently related?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 3 good
Contribution: 2 fair
Limitations: The analysis in this study focuses on 2-layer diagonal linear networks, which have limited representation abilities and exhibit some special properties such as Prop 1.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you very much for the valuable feedback.
We address your comment on the restrictive setting in the *official comment* section, since it was raised by nearly all the reviewers, and answer both of your questions below.
**Practical initialisations and saddle-to-saddle**: Saddle-to-saddle type of dynamics (which can also be referred to as the feature learning regime in the deep learning community) have been empirically observed in various cases **with practical initialisations**. For instance, in [22], the authors empirically *"give evidence for the hypothesis that, as iterations progress, SGD learns functions of increasing complexity"* when training deep neural networks. They emphasize that these observations are for *"standard random initialisation"*. For further references highlighting the incremental nature of learning for practical initialization, you can observe characteristic stepwise learning curves in Figures 1 and 2 from [a] as well as Figures 2 and 3 from [b]. Therefore, saddle-to-saddle dynamics **do not** simply correspond to a pathological and uninteresting phenomenon which only occurs asymptotically. They gradually appear and then amplify when taking the initialization to zero. In our setting, you can observe the progressive sharpening of the curves in Figure 3 (left) in the appendix. A very interesting question for future work would be to theoretically exhibit the threshold $\alpha_0$ at which the saddle-to-saddle dynamics starts to appear. However this would require a non-asymptotic and different type of analysis compared to the one we propose in our work.
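To make the incremental dynamics concrete, below is a minimal simulation of a 2-layer diagonal linear network $\beta = u \odot v$ trained by gradient descent from a small initialization (our illustrative sketch, not the paper's Algorithm 1; the dimensions, step size, and sparse target are arbitrary choices). The coordinates of a sparse target are recovered one by one rather than all at once:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 20, 40                        # under-determined: fewer samples than features
X = rng.standard_normal((n, d))
beta_star = np.zeros(d)
beta_star[:3] = [3.0, 1.5, 0.5]      # sparse, nonnegative target (the symmetric init
y = X @ beta_star                    # below keeps u == v, so u*v stays nonnegative)

alpha = 1e-8                          # small-scale ("vanishing") initialization
u = np.full(d, alpha)
v = np.full(d, alpha)
lr = 5e-3

active_counts = []                    # coordinates that have "escaped" zero so far
for t in range(60_000):
    if t % 1000 == 0:
        active_counts.append(int(np.sum(np.abs(u * v) > 1e-2)))
    r = X @ (u * v) - y               # residual
    g = X.T @ r / n                   # gradient w.r.t. the product beta = u * v
    u, v = u - lr * g * v, v - lr * g * u

# active_counts grows in steps (0 -> 1 -> 2 -> 3) instead of all at once:
# the iterate sits near a saddle, then jumps when the next coordinate escapes.
```

Shrinking `alpha` further sharpens the plateaus between jumps, while a large `alpha` blurs them into a single descent, consistent with the threshold $\alpha_0$ discussed above.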
**Algorithm 1 and CD**: Algorithm 1 and Coordinate Descent share similarities as they both iteratively solve minimisation problems over a fixed set of coordinates. The two are nonetheless distinct algorithms; the major difference is that Algorithm 1 progressively identifies the support of the minimum $\ell_1$ interpolator: the successive sets of minimised coordinates are appropriately chosen and are of overall increasing size. On the other hand, for CD, only **one coordinate at a time** is updated, and this coordinate is chosen very differently from Alg 1: in many cases it is simply chosen randomly. A major consequence of these differences is that Alg 1 terminates in a finite number of iterations while CD only converges asymptotically.
[a] Simon et al., On the Stepwise Nature of Self-Supervised Learning, arXiv 2023
[b] Oswald et al, Transformers Learn In-Context by Gradient Descent, arXiv 2023
---
Rebuttal Comment 1.1:
Comment: Thanks for the reviewer's detailed response to my questions. | null | null | null | null | null | null |
Improving Compositional Generalization using Iterated Learning and Simplicial Embeddings | Accept (poster) | Summary: This paper proposes an iterated learning method with simplicial embeddings (SEM-IL) for systematic generalization.
The method is inspired by iterated learning for humans, encouraging compressibility and expressivity.
The empirical experiments show the improvement in vision tasks (known latent factors) and real molecular graph prediction tasks (unknown latent factors).
It better portrays the generating factors in vision tasks.
It also finds that the success of iterated learning may be caused by discrete messages.
Strengths: - The analysis and experiments show the potential of the SEM-IL framework on systematic generalization problems.
- It is based on the cognitive science of iterated learning for humans.
Weaknesses: (1) As mentioned in the conclusion section, it is still unclear why the SEM-IL method enables systematic generalization from a theoretical perspective for deep learning.
(2) More experiments will be helpful.
The paper would be more convincing with a strict theoretical explanation or comprehensive experiments.
(3) As mentioned in line 88, the decomposition into G and O is fundamentally unidentifiable because there can be different ways to define factors that work equally well on training data.
So it would be helpful to summarize what additional information SEM-IL provides to identify them.
(4) It seems source codes are not provided.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Please address the points in the weakness section.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The limitations are mentioned but not in a dedicated section.
It does not have a dedicated section for social impact.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate the reviewer for pointing out the potential and shortcomings of our work. Please also refer to the overall response part for some common concerns.
> Q1: As mentioned in the conclusion section …
Our claim that “a theoretical understanding is still missing” might be too conservative. Under some assumptions, we do have theoretical guarantees on the behavior of neural agents involved in iterated learning. What we want to stress here is that the theoretical analysis for a real IL-SEM system is hard, as we don’t have enough theoretical tools to describe the behavior of a non-trivial network on a multi-label problem where the pseudo labels are sampled from some dynamic and unknown distributions (i.e., the imitation phase is complex). Hence we try to highlight the intuition of the method and use carefully designed experiments to verify it.
For the theoretical guarantees, we can start from the analysis of Bayesian agents in Appendix B.2. If we assume the agents involved in IL update their belief over all possible $h$ in a Bayesian way, we can then theoretically guarantee that the peak of the posterior is $h^* = \arg\max_h P_0(h \mid h \in H_{eff})$, where $H_{eff}$ contains all the hypotheses that satisfy the requirements of the interaction phase. The proof is not hard; we did not provide it in this version, since we think that might distract the readers from the experiments. In the toy 2*2 example, $H_{eff}$ contains all bijections. In real applications, $H_{eff}$ contains those $h$ that are capable of accomplishing downstream tasks. In short, $H_{eff}$ is the embodiment of the expressivity pressure, while the compressibility pressure is incorporated in $P_0(h)$. We assume the prior of a mapping is negatively correlated with its coding length, as illustrated in Table 4. This is a big assumption, which motivates us to claim that “the theory for IL-SEM is still missing.” However, this assumption is likely to be true for many intelligent agents, as it is exactly what Occam’s razor claims (see also formalizations via algorithmic information theory). We then discuss where this pressure comes from in deep learning in Appendix B.3. In this part, we find that for overparameterized models, multi-generation self-distillation can bring the simplicity bias of “active bases” to the model. Although most of the theoretical analysis in this part comes from [54], we believe our application here to explain IL is also a theoretical contribution.
In summary, we can guarantee the behavior of Bayesian agents in iterated learning with the assumption of $P_0(h)$. Such an assumption could have different forms in deep learning, one of which can be proved in an overparameterized setting.
> Q2: More experiments will be helpful …
Yes, we agree that more experiments would be helpful, but we'd also like to summarize what is shown by the experiments in the current version (please also refer to the response to Reviewer 7GSE-Q2):
1. A toy setting for the Bayesian agents is in Appendix B.2&3, which is helpful for understanding how IL works in an idealized case. We can directly observe the influence of the two pressures on the belief of different mappings.
2. Controlled vision tasks (section 4). With the generating factors, we can observe topsim, the learning dynamics of different pairs, and the influence of sampled pseudo labels. We cannot do such an analysis if ground truth G is inaccessible, hence we use settings where it is known; we’d like to highlight, however, that MPI3d-real in particular is a “real” problem based on actual images, not synthetic ones like dSprites (indeed quite simple) or 3dShapes (also simple, but requiring some real visual processing).
3. Diverse molecular graph tasks (section 5). The datasets we considered have different scales and a variety of targets. With the help of the structural features extracted by the RDKit tool, we can further verify that the representations learned via IL-SEM capture the structure of generating factors well.
The general response also gives some initial experimentation with GPT2, which we will add to the revised version.
> Q3: As mentioned in line 88 …
Indeed, the decomposition into G and O is fundamentally unidentifiable. However, we believe all the correct decompositions should capture the Hamming distance relationships among samples, which is determined by the ground truth generating mechanisms. For example, no matter how we decompose a specific G (e.g., the color could be {”blue”, “red”} or RGB value), the distance between “blue box” and “blue circle” should be smaller than that of “blue box” to “red circle”: the two objects in the first pair share some similarity we care about. If one decomposition cannot capture this distance relationship, it will not help with systematic generalization. Correct decompositions should be “isomorphic” (see further discussion in the response to r5b6, Q1b).
For the concern about “additional information”, our answer is that there is no additional information provided. The algorithm only **amplifies the simplicity bias** to distinguish mappings from stages 3 and 4. Recalling the toy 2*2 example from Appendix B, both systematic mappings and “holistic” mappings are bijections — mutual information between G and z cannot distinguish them. One separation, however, is the simplicity of the mapping; see the discussion in lines 941-951 for an example. For the origin of this simplicity bias in deep learning, please refer to the response to Q1, Appendix B.3, and the response to reviewer eFnA (Q5).
In summary, we don’t care much about the specific decomposition of G; it is not used in the algorithm. Rather, the compressibility pressure will guide our model to the most compressed mappings. Based on Occam’s razor, we believe these mappings are likely to capture the ground truth generating mechanisms and hence are able to generalize systematically.
> Q4: The code …
We have sent an anonymous link to the code to the AC, following the instructions.
---
Rebuttal Comment 1.1:
Title: Reviewer response needed
Comment: Hello Reviewer,
The authors have endeavoured to address your comments in their rebuttal. The rebuttal phase is a key part of the NeurIPS review process. I invite you to read and respond to the author's comments as soon as possible, latest tomorrow, to give everyone time to continue and conclude the discussion.
Thank you for helping make NeurIPS a great conference for our community.
---
Rebuttal Comment 1.2:
Comment: Thank you for the rebuttal. I raised the score. | Summary: This work addresses the challenge deep neural networks face with systematic generalization. Systematic generalization refers to the ability to apply learned concepts to new, unobserved combinations. The authors draw on a cognitive science theory known as "iterated learning", a hypothesized process explaining how human language developed this representation ability. The theory is based on simultaneous pressures towards compressibility and expressivity when transmitting messages. The researchers propose using iterated learning, in combination with deep network models that incorporate simplicial embeddings, to generate nearly discrete messages, thereby improving systematic generalization. The effectiveness of this approach is demonstrated in vision tasks with known latent factors and in molecular graph prediction tasks with unknown latent structures, where it outperforms other methods.
Strengths: 1. The paper is well-written. The motivation is clear and the problems are stated clearly.
2. The problem addressed in this work is important, and could be particularly interesting to many communities such as causal learning or domain generalization.
3. The empirical performance looks promising, where the gap between the baseline and the proposed approach is non-negligible.
Weaknesses: 1. The method section is not described clearly. In particular, in section 3.1, the authors did not explain why IL can be used to improve the performance. Similarly, in section 3.2, I cannot understand the motivation of adopting SEM. Most of the details are deferred to Appendix B, however, it is still important to briefly explain why IL and SEM can lead to the desired results.
2. The main experiments are done with synthetic dataset such as 3dShapes or dSprites dataset. It would be interesting to see how the proposed problem can address the distribution shift problem in domain generalization benchmark such as DomainBed.
3. The model attempts to learn the minimal feature to perform prediction. This could fail when there is “short-cut” presenting in the dataset. In particular, there might be “easy feature” that is highly correlated with the true target feature, where the model could learn to use those easy features instead in the proposed setting.
4. The main theoretical results in Appendix B.3 are adopted from previous work [56], where they consider a distillation setting.
5. The main motivation is based on compressibility and expressivity. The same concept is also proposed in various previous works such as the Variational Information Bottleneck (VIB), supervised contrastive learning, invariant risk minimization (IRM), or works in causal representation learning. Nevertheless, there is no discussion or comparison to previous works.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. There are three analyses provided in Appendix B to explain IL. It is a bit confusing why we need so many explanations here. In particular, it is hard to relate the different explanations, e.g., connecting the Bayesian analysis with the KRR analysis.
2. How do we guarantee the predictor g is a good classifier in IL? Do we also train g in the IL procedure?
3. Why can’t IL alone improve the performance? According to Appendix B, it seems that IL alone can address the proposed problem in representation learning.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: There is no obvious potential negative social impact of this work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate the reviewer for pointing out the potential of our work. Please also refer to the overall response part.
> Q1: The method section …
Thanks for highlighting this issue; as mentioned in the general response Q1, we had trouble conveying these aspects within the space limitations, but in the final version we will emphasize the most important points of the argument. For what this will roughly look like, please see the response to reviewer eFnA (Q2 to Q4).
> Q2: The main experiments are …
First, we’d like to summarize what is shown by the experiments in the current version:
1. A toy setting for the Bayesian agents in Appendix B.2&3, which is helpful for understanding how iterated learning works in an idealized case. We can directly observe the influence of the two pressures on the belief of different mappings.
2. Controlled vision tasks (section 4). With the generating factors, we can observe topsim, the learning dynamics, and the influence of pseudo labels. We cannot do such an analysis if ground truth G is inaccessible; we’d like to highlight, however, that MPI3d-real in particular is a “real” problem based on actual images, not synthetic ones like dSprites or 3dShapes.
3. Diverse molecular graph tasks (section 5). The datasets we considered have different scales and a variety of targets. With the help of the structural features extracted by the RDKit tool, we can further verify that the representations learned via IL-SEM capture the structure of generating factors well.
The general response also gives some initial experimentation with GPT2.
For the distribution shift on G, we would expect some improvements, because sys-gen is indeed a hard distribution shift problem: remember that the supports of P(G) for the train and test sets are non-overlapping in our setting, which is a big shift.
However, we are not sure whether IL-SEM brings enhancement to domain adaptation problems as in DomainBed, because of their different settings. In particular, methods run on DomainBed typically know which examples come from which domain, and try to find some kind of invariance or similar properties across domains. That is, in our framework, the domain is (a) explicitly observed and (b) belongs in the “irrelevant” O factors, not in G. Our method, however, does not do much about O; it is designed for a different setting.
It may be possible to build approaches for DomainBed-type problems based on IL. For instance, we could perhaps begin with a general-purpose interaction task (as in SSL), treating all semantic features as G. We could then try to explicitly identify which factors of z correspond to the domain ID, and remove them. It also could be possible to directly incorporate domain adaptation approaches into the interaction phase. This seems beyond the scope of the present submission, however.
> Q3: The model attempts to learn …
Relying on the “short-cut” features is a long-standing problem in machine learning, which is unavoidable for any learning algorithm without extra information, including IL-SEM. In this paper, we assume there is no spurious correlation between factors in O and G. However, as in the response to Q2, most methods designed to alleviate the short-cut problem would be simple to add into the interaction phase. This version of IL-SEM would then still combine compressibility pressure from the imitation phase and ideally would gain the advantages of both frameworks.
> Q4: The main theoretical results …
The analysis of Appendix B.3 is indeed built off of [56], but we would like to highlight that in addition to bringing their theoretical analysis to this setting, we also give some theoretical insight via our definition of sys-gen, the analysis of the ladder of systematicity based on mutual information, and the Bayesian agent analysis. Please also see our discussion of the theoretical contributions in our response to Reviewer 7owJ (Q1).
> Q5: The main motivation is based on …
There might be a misunderstanding here: the term “compressibility” used in this paper is different from that used in IRM or VIB. In their setting, this word means compressing the information contained in X into z, corresponding to moving from stage 1 to 3. Our use of “compressibility” measures whether a bijection can be compressed to a simpler function, i.e., stage 3→4. For causal representation learning, Figure 1 is similar to a causal diagram: we can consider all factors in G as causally determining the labels Y. We will add some discussion in revision.
> Q6: There are three analyses provided …
Sorry for not clarifying that. The Bayesian analysis aims to propose a theoretical guarantee for the converging behavior of the agents involved in iterated learning. However, this theory is not perfect: one of the most important facts about compressibility pressure, the prior distribution $P_0(h)$, must be manually designed. Appendix B.3 justifies where this bias comes from when using gradient descent in deep learning without an explicit prior. The KRR in this part explains that in a simplified setting, the compressibility pressure can be considered as regularizing the “active basis” used to explain the training data, which is one form of compressibility.
> Q7: How do we guarantee the predictor …
Yes, $g$ will be trained together with $f$ in the interaction phase. We can treat the interaction phase here as a standard supervised learning process – the whole $g\circ f$ is differentiable thanks to SEM – with a special initialization of the backbone part. We don’t use $g$ in the imitation phase.
> Q8: Why can’t IL alone improve …
The main difference between IL-only and IL-SEM is the representation space. The discrete message will amplify the compressibility pressure for different reasons. First, it enables us to use the sampled pseudo labels from the teacher during imitation (discussed in section 4.2). Second, it enables us to use cross-entropy loss rather than MSE loss during imitation (discussed in Appendix B.3).
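As a concrete illustration of this last point, here is a minimal numpy sketch of an SEM-style bottleneck (a group-wise softmax; the batch size, group count, and group dimension are arbitrary illustrative choices, not values from the paper). Because each group is a distribution over a few discrete slots, the teacher can *sample* a hard pseudo label per group, which the student can then fit with a cross-entropy loss:

```python
import numpy as np

rng = np.random.default_rng(0)

def simplicial_embedding(h, num_groups, group_dim, tau=1.0):
    """Split the representation into groups and apply a softmax within each
    group, so every group lies on a simplex (a soft discrete token)."""
    z = h.reshape(-1, num_groups, group_dim) / tau
    z = np.exp(z - z.max(axis=-1, keepdims=True))   # stable softmax
    return z / z.sum(axis=-1, keepdims=True)

h = rng.standard_normal((8, 64))                    # backbone output f(x)
z = simplicial_embedding(h, num_groups=16, group_dim=4)

# Teacher pseudo labels for the imitation phase: sample one discrete index
# per group from the simplex weights; the student fits them with CE loss.
pseudo = np.array([[rng.choice(4, p=p) for p in row] for row in z])
```

Lowering `tau` pushes each group toward a one-hot vector, i.e., toward a fully discrete message.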
---
Rebuttal Comment 1.1:
Title: Response
Comment: Thanks for the rebuttal and additional results. Nevertheless, I still share similar concerns mentioned by other reviewers. For Q4-6, the definitions and the theory are not completely coherent, and one needs to consider various frameworks, e.g., Bayesian and non-Bayesian theory, to understand those stages. Q8 is also not well-addressed, as the other reviewers also stated. Discrete features have been widely adopted to improve performance; however, combining IL with them does not mean IL is the key to improving performance. Overall, I think some presentations and clarifications could be improved, and I will keep my score.
---
Reply to Comment 1.1.1:
Title: Concerns about theory and Q8.
Comment: Thanks for the reviewer's feedback. We agree that the definitions and theories are not completely coherent, which weakens the paper's contribution. While we recognize that an integrated and self-contained theoretical framework would undoubtedly bolster the paper's strength, we must admit its complexity (and many widely used methods in deep learning need such theories as well). Nevertheless, we believe considering various frameworks is not necessarily a bad thing: some readers might favor describing the same phenomenon from different perspectives, which can make the claim more persuasive. So we put them in the appendix for readers who are interested in IL.
For the concerns about Q8, i.e., proving that IL is the key to improving performance: we have ablation studies showing that IL+SEM outperforms IL-only, SEM-only, and the baseline. When G is given, we also make a prediction on how the confidence of different message pairs evolves and verify it using experiments. We believe these observations are sufficient to claim that, under the experimental setting, IL is the key to improving performance. We would appreciate it if the reviewer could give some suggestions (e.g., experimental designs) on how to prove the role of IL.
By the way, the original Q8 is asking why IL alone cannot make improvements, which is different from "why IL is the key to improving the performance". | Summary: This paper aims to develop methods to improve systematic generalization. The paper proposes several theoretical criteria related to representation learning to improve systematic generalization. Motivated by these principles, the paper then studies whether two methods, iterative learning (IL) and simplicial embeddings (SEM), and their combination can improve systematic generalization. They evaluate these methods on synthetic vision tasks and several tasks related to modeling molecules. They find that the combination of these two methods leads to the highest performance, on average, across these tasks.
Strengths: * I appreciated the formal treatment of the problem in section 2, the intuition offered in section 3, and the empirical analysis connecting the proposed methods to the various hypotheses in these sections. Otherwise it would have been non-obvious how SEM and IL would be expected to improve systematic generalization.
* The paper is well written and does a nice job connecting various bodies of work. While the empirical results for the specific proposed methods are not overly convincing (see weaknesses), I could see this work potentially inspiring new ideas.
* Different from prior work on systematic generalization that proposes specialized models with task-specific inductive biases, SEM and IL are relatively general methods that do not necessarily limit the expressiveness of the underlying model.
Weaknesses: * While I liked the intuition-building and "ladder of systematicity", I have two concerns with it. First, Stages 1 to 3 seem to directly follow from prior work, e.g. the information bottleneck principle (to the author's credit, this appears to be acknowledged in the appendix). Second, maybe misunderstanding on my part, but I was not fully satisfied with the definition of stage 4. As informal intuition, it makes sense that we would want to learn representations `z` that are similarly "structured" to the latent factors of the data generating procedure `G`. But it wasn't clear to me which operations are expected to be preserved through the proposed "isomorphism". Additionally, while `G` informally represents "semantic factors" it wasn't clear if this can be translated to a formal constraint on the structure of `G`, which we want `z` to be isomorphic to? I think it's fine to leave this mostly informal, but since the paper seemed to attempt to formalize this, I was left a bit unsatisfied.
* It seems like most of the intuition offered for SEM and IL corresponds to stages 1 to 3 of the ladder, which per the first point is closely aligned with criteria proposed by prior work. It wasn't clear why these methods should help achieve stage 4, which seemed more unique to the systematic generalization problem setting.
* The proposed methods seem to assume a `z` with fixed dimensionality. While this seems reasonable for one of the motivating examples (e.g. `z` representing color and shape), it is less clear how this could be applied to systematic generalization in natural language, which is referenced multiple times in the text. Prior work on systematic generalization in natural language has often considered dynamically-sized tree-structured latent variables.
* The overall effectiveness of SEM and IL is hard to gauge. The experiments show improvements on synthetic vision tasks and 3 tasks related to modeling molecules. However, it's unclear whether the proposed methods are effective on real-world vision or NLP tasks that are more popularly studied within the ML community.
Technical Quality: 4 excellent
Clarity: 3 good
Questions for Authors: See weaknesses above.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 3 good
Contribution: 2 fair
Limitations: A more explicit discussion of limitations could be useful.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate the reviewer for pointing out the potential of our work. Please also refer to the overall response.
> Q1a: While I liked …
The analysis of this part originates from the information bottleneck principle (we will make this clearer in revision), but there are also some distinctions. This principle is proposed to explain different stages of GD-based supervised learning, while our ladder applies it to rank the capability of the representations learned via different methods (e.g., supervised learning, SSL, etc.) The goal of this analysis is to propose stage 4.
> Q1b: Second, maybe misunderstanding …
For stage 3→4, we believe the toy 2*2 example mentioned in Appendix A.4 and B.2 can provide a good intuition. Compare sys-mappings to arbitrary (holistic) bijections. As a sys-mapping can be decomposed into some shared rules while a holistic one cannot (see Table 4), the former can generalize to unseen combinations, while the latter cannot. Such rule **decomposition** is the **isomorphism** mentioned by the reviewer: imagine we have 2 attributes, each with 4 possible values; a sys-mapping can be decomposed into 9 rules while a holistic one needs 16 rules. Based on Occam's razor (simplicity bias), the ground truth generating mechanism is likely to have only 9 rules.
Regarding the formal definition of this isomorphism, we do have one using the wreath product in group theory (the detailed definition of Hypothesis 1 is its informal form). However, this definition requires the same $|G_i|$ for different $i$, and different $G_i$ also have to be independent of each other. Under this definition, we can formally prove that the isometry group of mappings is Hamming-distance preserving, which means the topological similarity between G and z should be highest for a sys-mapping among all possible mappings (this can be proved using Corollary 4.2 of [Panek and Panek (2017)](https://arxiv.org/abs/1705.09987)).
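This distance-preservation property is easy to check numerically. The sketch below (our illustration; it uses Pearson correlation between pairwise distances as a simple stand-in for the Spearman-based topographic similarity) compares a systematic mapping, which renames each attribute independently, against a holistic one, i.e., an arbitrary bijection over the 16 objects:

```python
import itertools
import random
import numpy as np

random.seed(0)
V = list(itertools.product(range(4), range(4)))   # 16 objects: 2 attributes x 4 values

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

# Systematic mapping: each attribute is renamed independently
# (9 "rules": 4 + 4 value renamings plus 1 composition rule, vs 16 arbitrary pairs).
perm1 = random.sample(range(4), 4)
perm2 = random.sample(range(4), 4)
sys_map = {g: (perm1[g[0]], perm2[g[1]]) for g in V}

# Holistic mapping: an arbitrary bijection between objects and messages.
shuffled = V[:]
random.shuffle(shuffled)
hol_map = dict(zip(V, shuffled))

def topsim(mapping):
    """Correlation between pairwise distances in G-space and message space
    (a Pearson proxy for the Spearman-based topographic similarity)."""
    pairs = list(itertools.combinations(V, 2))
    dg = [hamming(a, b) for a, b in pairs]
    dz = [hamming(mapping[a], mapping[b]) for a, b in pairs]
    return np.corrcoef(dg, dz)[0, 1]

print(topsim(sys_map))   # prints 1.0: pairwise Hamming distances are preserved exactly
print(topsim(hol_map))   # markedly lower: a random bijection scrambles the structure
```

The systematic mapping is an isometry of the Hamming space, so its topsim is exactly 1, whereas a random bijection almost never preserves the distance structure.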
In the end, we deleted this proof from the submission, for the following two reasons. First, we don’t want to make strong assumptions on $G$ which aren’t satisfied by real problems, e.g. in graph problems where $G$ has complex structures and dependencies, since these assumptions are not necessary for IL: compressibility pressure still exists when $G$ is complex. Second, although we have a formal definition of the problem in the simplified setting, we find our analysis doesn’t depend on that. We thought that including these formal results in a simple setting might distract the readers from the greater generality of our intuitive analysis. This setting is similar to that of [34] for disentanglement, which we believe has similar issues. We will restore these results in the appendix in revision if it would be helpful, however.
> Q2. It seems like …
There might be a misunderstanding here. As discussed in Q1b, we know the main difference between mappings of stages 3 and 4 is how well they can be compressed. But this compression does not describe compressing information in X to representation z: it describes _compressing the mapping G→z to fewer rules_, which brings us from stage 3 to stage 4. The term “compressibility” is probably the source of this misunderstanding; we will clarify this in revision. This compressibility pressure is assumed to be “built-in” in the human cognition system, and has different origins in deep learning. Please refer to Appendix B.3 for more details.
> Q3. The proposed method …
First, the fixed $z$ won’t harm the performance much as long as the gap between the dimensions of $z$ and $G$ is not too big: some dimensions of $z$ can encode multiple features or noise. This can be verified by the graph experiments: we don’t know the ground-truth $|G|$, but the performance doesn’t change much over a wide range of $|z|$ values.
What indeed matters is the structure of $G$ and $z$, which brings us to the second point. For text input, which might have a more complex structure on $G$, we agree that yes, the SEM bottleneck may not be enough. The SEM structure simulates a $G$ without any hierarchical structure, simply different discrete factors. For inputs with more complexly structured $G$, it might help to upgrade the bottleneck to a more complex (e.g., attention-based) structure, as well as measuring with more complex metrics such as the TRE of [Andreas (2019)](https://arxiv.org/abs/1902.07181) or HSIC [27] with more complex kernels. This adds substantial complexity, however, and since the current submission already has significant complexity and major concepts to explain, we thought it better to defer this to future work.
> Q4. The overall …
IL has previously proven effective in many real applications, including multi-label image classification ([Rajeswar et al., 2022](https://arxiv.org/abs/2111.12172)), machine translation [54], and VQA [83]. However, these related works lack careful discussions on why IL works and how to apply IL to more general problems; they all rely on the inherent encoder-decoder structure. On the other hand, our submission first analyzes the key building blocks of IL. To achieve this, we chose synthetic vision datasets because knowing $G$ makes analysis much easier, and graph data because we can use RDKit to extract structural information to verify what was learned in $z$. Second, we show how to convert a general representation learning problem into an encoder-decoder style for IL, which we believe is crucial to enlarge the scope of IL. By the way, the datasets we considered, though controlled, are quite “real”: MPI3D is generated by taking pictures of a real mechanical arm, and PCQM analyzes millions of real molecules.
Regarding large-scale NLP tasks, many traditional solutions already have encoder-decoder structures and discrete messages, as in [54]. Directly applying IL would be fine. For the more advanced decoder-only structure, like GPT, we are still exploring the potential of IL (we tried a small example on GPT2; please see Q3 in the general response).
---
Rebuttal Comment 1.1:
Title: Reviewer response needed
Comment: Hello Reviewer,
The authors have endeavoured to address your comments in their rebuttal. The rebuttal phase is a key part of the NeurIPS review process. I invite you to read and respond to the authors' comments as soon as possible, by tomorrow at the latest, to give everyone time to continue and conclude the discussion.
Thank you for helping make NeurIPS a great conference for our community.
---
Rebuttal Comment 1.2:
Comment: Thank you for your reply. I confirm my original score. | Summary: Iterated learning is hypothesized to help human language develop representation ability. The paper proposes to use iterated learning with deep network models containing simplicial embeddings to obtain approximately discrete messages. They show that this combination of changes improves systematic generalization over other approaches, demonstrating these improvements both on vision tasks with well-understood latent factors and on real molecular graph prediction tasks where the latent structure is unknown.
Strengths: The paper aims to improve the systematic generalization capability of deep neural networks, which is currently an important topic in the field. The writing and figures are clear and quite easy to follow.
Weaknesses: Please see in Questions.
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: 1. A more detailed explanation for iterated learning, which is currently lacking in the paper, will be appreciated.
2. The intuition of using iterated learning in this paper is merely from results in cognitive science. I have not seen convincing reasons why it is appropriate to incorporate iterated learning in deep neural networks, and what limitations of previous works can be tackled by doing so. The introduction should be rewritten to highlight these points.
3. As illustrated in Figure 3 and Table 2 (as the authors also note), iterated learning alone may not be enough. This again raises a question: is iterated learning really necessary for this framework?
4. I have not seen a clear connection between iterated learning and simplicial embedding. As far as I understand, the main theme of the paper is about iterated learning, and simplicial embedding empirically improves the performances further (as mentioned in Section 3.2). That is to say, the combination of iterated learning and simplicial embedding in this paper may come from trial and error rather than a thorough theoretical justification. It would be beneficial if the authors can highlight this connection in their paper.
5. The paper mentions compressibility and expressivity pressures throughout the main text. Can we quantify these pressures to show that the proposed method really helps them emerge through iterated learning?
I would happily increase my score if the authors carefully tackle my concerns.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 3 good
Contribution: 2 fair
Limitations: The authors have included some discussions regarding the limitations of their work in the conclusion section.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate the reviewer’s feedback and comments. Please also refer to the overall response for some common concerns, and here is the piece-to-piece response.
> Q1. A more detailed explanation …
Thanks. We will fix it in the revision.
> Q2. The intuition of using …
We can start from the ladder of systematicity. By analyzing the information among OGXYZ, we conclude that only focusing on the mutual information between G and z is not enough for sys-gen. In the colored-MNIST example (lines 100-111), there are many possible bijections (with maximal MI), but only sys-mappings allow for sys-gen to unseen combinations of factors. Compressibility pressure finds simpler mappings, e.g. those based on fewer “bases” as discussed in lines 941-951 in the example of Appendix B.3 or those with shorter description lengths as in Table 4. In general, this should find more systematic mappings, considering arguments based on Occam’s razor or e.g. Kolmogorov complexity. Note that the term "compressibility" here does NOT refer to compressing information from X into z, as is used to go from stage 1 to 3, but to learning rules which themselves can be compressed, i.e., from stage 3 to 4.
Where does compressibility pressure come from? Here are two independent lines of work that guide us to IL. One is from a line of work stemming from cognitive science but with some existing applications in deep learning: [44] claims the role of two pressures; [66] extends this to neural agents; [83] adapts this to a seemingly different application, i.e., VQA. Another line is multi-generation self-distillation, which is also known as Born Again Networks (BAN); these can be framed as IL without an interaction phase. As discussed in [56] and Appendix B.3, BAN can impose a strong regularization on the number of “active bases”, which is proven to be hard to achieve via other explicit regularizers. BAN alone lacks the expressivity pressure given by an interaction phase, though, as discussed in Appendix B.3.
> Q3. As illustrated in Figure 3 …
We certainly do not believe that iterated learning is the only conceivable way to achieve systematic generalization. The theoretical arguments of [56] establish that the way in which IL/BAN enforces compressibility pressure (via "active bases") is difficult to achieve through standard regularizers, however, giving some credence to the idea that some form of interaction is helpful. Their analysis is in a simplified setting compared to neural network learning, though, and there is no guarantee that this particular pressure is the only useful way to achieve sys-gen. In cognitive science, [Raviv et al. (2019)](https://eprints.gla.ac.uk/281145) showed a single-generation model (with its own complexities) can achieve similar results. IL, however, is certainly vital to the approach we explore in this paper (as shown by the improvements over SEM-only in our experiments).
> Q4. I have not seen a clear connection …
To apply IL in a general representation learning system, we must divide the network into an encoder and a decoder, as discussed in section 3.2. We should also discretize the messages; discrete messages allow us to sample pseudo-labels and use cross-entropy loss during the imitation phase, which substantially enhances compressibility pressure, as discussed in Section 4.2 and Appendix B.3.
Previously, [66] used an LSTM encoder and decoder and trained the whole system with REINFORCE for an emergent communication task. However, we found this method to be quite unstable: non-differentiability of the learning objective leads to huge gradient variance. Similar problems exist with other training methods such as Gumbel-Softmax or straight-through estimators. Thinking about this problem led us to SEM, previously proven to be effective on SSL methods [49]. (Interestingly, [Dessì et al. (2021)](https://arxiv.org/abs/2106.04258) drew parallels between Lewis's referential game, used in iterated learning, and SimCLR, a popular SSL method.) Furthermore, as illustrated in Figure 6 of [49], SEM can help to learn representations with higher semantic coherence (i.e., higher topsim).
This gives three different pathways to SEM:
- IL → Lewis’s game → SimCLR → SSL → SEM
- IL → discrete message → differentiable → SEM
- IL → compressibility → high topsim → SEM.
We will emphasize these connections in revision to make it clearer that this did not simply emerge from plugging together random ideas until something worked.
> Q5. The paper mentions …
Quantifying these pressures is important. Besides observing the learning behavior, it could also help to design explicit regularizers to avoid time-consuming iterative training.
For expressivity, the training loss of the downstream task in the interaction phase is a good measurement. If there is no specific target task, cognitive science applications use Lewis’s game, analogous to SimCLR (mentioned above); thus conducting SSL in this phase and using its objectives is also a reasonable choice.
When the ground truth G is accessible, topsim is a reasonable measurement of compressibility pressure. For unknown G, things are more difficult, but there are various possibilities for measuring the “simplicity” based on learning theory or algorithmic information theory. For instance, see the survey of [Jiang et al. (2019)](https://arxiv.org/abs/1912.02178); many of these measures correspond at least loosely to a notion of description length as illustrated in our Table 4. The number of “active bases,” as discussed by [56], is also related to “sparsity” in the function computed by the network, but is nontrivial to measure in complex settings. Learning speed might also be a good measure; it has some support in learning theory, and the integral under the curve of NLL through learning is related to the compression ratio (see e.g. [Rae (2023)](https://www.youtube.com/watch?v=dO4TPJkeaaU)). Our Figures 10 and 11 also show more compressed mappings are learned faster by practical networks.
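For reference, topsim is typically computed as the correlation between pairwise distances in $G$-space and in message space. A minimal pure-Python sketch (our own illustration, using Pearson correlation rather than the Spearman variant common in the literature; all names are ours, not the paper's) could look like:

```python
from itertools import combinations

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def topsim(gs, zs, dist_g=hamming, dist_z=hamming):
    """Correlation between pairwise distances in G-space and message space."""
    pairs = list(combinations(range(len(gs)), 2))
    dg = [dist_g(gs[i], gs[j]) for i, j in pairs]
    dz = [dist_z(zs[i], zs[j]) for i, j in pairs]
    return pearson(dg, dz)

# A perfectly systematic code (z relabels G factor-by-factor) gives topsim 1.0;
# a holistic relabelling that breaks the factor structure scores lower.
gs = [(i, j) for i in range(4) for j in range(4)]
zs_sys = [(i + 10, j + 20) for i, j in gs]
print(round(topsim(gs, zs_sys), 3))  # -> 1.0
```

Swapping even two message assignments between non-adjacent $G$ points already drops the score below 1, which is what makes topsim a usable proxy for how systematic a mapping is.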
---
Rebuttal Comment 1.1:
Title: Reviewer response needed
Comment: Hello Reviewer,
The authors have endeavoured to address your comments in their rebuttal. The rebuttal phase is a key part of the NeurIPS review process. I invite you to read and respond to the authors' comments as soon as possible, by tomorrow at the latest, to give everyone time to continue and conclude the discussion.
Thank you for helping make NeurIPS a great conference for our community.
---
Rebuttal Comment 1.2:
Comment: I thank the authors for their responses. I have raised my score. | Rebuttal 1:
Rebuttal: We appreciate the reviewers’ feedback and comments, which are quite helpful for us in improving the paper. In this overall response, we summarize some common concerns from different reviewers and provide links to the corresponding responses. Some new experimental results are also discussed in this part.
> Q1. Why do we expect iterated learning to work in deep learning, and why do we choose SEM?
In short, our solution follows 4 steps:
1. From the ladder of systematicity, we expect sys-gen to need a form of simplicity bias (compressibility pressure).
2. Iterated learning can impose compressibility pressure in some simple settings (particularly, with discrete messages).
3. To extend iterated learning to general machine learning problems, we need an encoder-decoder system and discrete message space. We also need the training to be stable.
4. Using SEM in our encoder-decoder helps create a discrete message bottleneck while keeping the whole model differentiable.
There are also more subtle connections between SEM and IL that motivated us to propose this solution; please see our response to reviewer eFnA, under Q4 (as well as Q2/Q3) for more details.
This paper uses three major concepts (sys-gen, IL, and SEM) that are not widely known to machine learning audiences. Hence, it is not easy to give all of the necessary background in 9 pages. In the current version, we probably erred too much on the side of describing the context in detail while relegating too much explanation and intuition to the appendix, which made the paper a bit hard to follow. We will do our best in revision to emphasize the main ideas behind these choices (as discussed above) in the main body, while still explaining the necessary background in enough detail.
> Q2. What are the theoretical contributions of this paper?
The theoretical contribution of this paper starts from the formal definition of the sys-gen problem (mentioned in response to Reviewer r5b6-Q1b) and the analysis of the four stages of the systematicity ladder. By analyzing the mutual information among different variables involved in the sys-gen problem, we claim that stage 3→4 needs non-trivial systematicity bias. We then analyze the behavior of Bayesian agents in iterated learning with a manually designed systematicity bias on $P_0(h)$: the behavior of these agents can be theoretically guaranteed (mentioned in response to Reviewer 7owJ-Q1). To find whether this systematicity bias exists in deep learning, we propose Appendix B.3. The KRR analysis adopted from [56] provides a theoretical guarantee of the existence of a specific form of systematicity bias, the number of “active bases”, in a simplified setting.
> Q3. Experiments
Please also refer to the response to Reviewer 7GSE-Q2, which explains the role played by different experiments in our previous work. We didn’t consider common vision datasets like CIFAR or ImageNet because we found it hard to fit them into the sys-gen problem: they usually contain only one multi-class label, from which it is hard to extract G. Work on disentangled representation learning also rarely considers these datasets. Compared with the vision tasks, we find molecular graphs to be a good test-bed for sys-gen, as discussed at the beginning of Section 5.
For the NLP problems, many traditional solutions already have encoder-decoder structures and discrete messages. There is no need to insert an SEM layer to discretize messages, hence directly applying IL would be fine, as discussed in [54]. For the more advanced decoder-only structure, like GPT, we are still exploring the potential of IL; we tried a small example on GPT2, shown in the attached PDF.
Pdf: /pdf/0a19c219a202055945955bd08afd313555b62e71.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Punctuation-level Attack: Single-shot and Single Punctuation Can Fool Text Models | Accept (poster) | Summary: The paper proposes a novel mode of textual attack, punctuation-level attack, which aims to fool text models by conducting punctuation-level perturbations, including insertion, displacement, deletion, and replacement. This paper also introduces Text Position Punctuation Embedding (TPPE) as an embedding method and Text Position Punctuation Embedding and Paraphrase (TPPEP) as a search method to reduce the search cost and time complexity of determining optimal positions and punctuation types for the attack. The paper demonstrates the effectiveness of the punctuation-level attack and the proposed methods on various tasks such as summarization, semantic-similarity scoring, and text-to-image tasks.
Strengths: 1. The proposed punctuation-level attack expands the scope of adversarial textual attacks. By focusing on punctuation-level perturbations, the authors provide an approach to fooling text models while minimizing its impact on human perception.
2. The proposed methods, TPPE and TPPEP, not only enhance efficiency but also reduce computational costs. Additionally, the authors present a comprehensive mathematical analysis of these approaches.
3. The effectiveness and versatility of the proposed punctuation-level attack are demonstrated by the experimental results on various datasets and state-of-the-art (SOTA) models.
Weaknesses: 1. Whether LLMs can also be fooled, which should be discussed in the paper
I suspect this vulnerability is due to the amount of training data not being large enough. It would be interesting to know whether LLMs like ChatGPT still fall into such a deficiency, since LLMs are trained on huge data.
2. Why PLMs fail on punctuations is not discussed
The punctuation-level attack does not surprise me a lot. The most interesting problem is to probe into the reason why PLMs can be fooled by punctuations.
Unfortunately, this is not in the paper, which largely limits the contribution of the work.
3. The defense is not discussed
The authors do not provide a study on how to defend against the punctuation-level attack. A contribution showing how to make language models (e.g., DeBERTa) robust against punctuation attacks would be much more significant than the attack itself. To me, this is very important for the community to improve real-world systems against underlying attackers.
4. The attack success rate is only promising on CoLA, while limited on the other datasets.
However, CoLA is not a suitable dataset for evaluation.
CoLA requires the model to decide whether the given sentence is linguistically correct. The manipulation of punctuations can spoil the label of the original sentence.
5. Punctuation modifications can change the original label on some task
Although punctuation-level attacks are safer than word-level modifications, they can still change the original label on some tasks, e.g., CoLA. In extreme cases, punctuation can even change the entire semantics. It would be better for the authors to discuss this in the paper, e.g., some failure cases. This is also important for future research.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: How do you get the E_text, and E_punc? Do you use pretrained models or a finetuned one on the target task? The embeddings you extract are from the embedding layer or the last layer of the transformer blocks?
For E_punc, do you just input a single punctuation as the complete sentence into the model? If so, this is quite strange.
What are the dimensionalities for E_text, E_pos and E_punc and how can they be concatenated or added up?
You mentioned “We transform the attacking task into a paraphrasing task.” In line 269 when introducing TPPEP. What is the model used to conduct this paraphrasing task?
You mentioned “The adversarial candidate text with the highest paraphrasing score calculated by 272 the TPPEP method is chosen to deploy the attack.” In line 271-272. It seems that you need to do several queries and give them a rank. Is this contradictory to the O(1) complexity you claimed?
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: see the weakness part
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Q1: Whether LLMs can also be fooled
A1: Due to time constraints, we focused our efforts on the summarization task using ChatGPT, with only 20 samples per task. Our attack inserted a single punctuation mark. After applying this attack, the ROUGE-1 score decreased from 22.8 to 16.7. This decline suggests that punctuation-level attacks remain effective even for complex tasks such as generation. Our experimental outcomes also illustrate the vulnerability of extensively trained Stable Diffusion models to punctuation-based attacks, and existing literature has already demonstrated the susceptibility of LLMs to adversarial attacks on textual data.
Q2: Why PLMs fail on punctuations
A2: Analyzing the post-attack data distribution shows that text altered by punctuation-based attacks is out-of-distribution (OOD) for pre-trained language models (PLMs). For instance, the Replacement attack substitutes ASCII punctuation with its fullwidth counterpart, e.g., '!' with '！', '?' with '？', and ',' with '，'. Given the rarity of such Chinese punctuation marks in the training data of English-based PLMs, sentences transformed by punctuation-based attacks become OOD samples for PLMs, which reduces their generalization performance on this data. This is why PLMs handle such punctuation poorly.
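As a concrete (hypothetical) illustration of such a Replacement attack, not taken from the paper's code: swapping an ASCII punctuation mark for its fullwidth CJK counterpart leaves the sentence visually almost unchanged for a human reader, yet produces tokens that are rare in an English PLM's training data:

```python
# ASCII -> fullwidth counterparts; visually near-identical for human readers.
FULLWIDTH = {",": "\uff0c", "!": "\uff01", "?": "\uff1f", ".": "\uff0e"}

def replace_punct(text: str, index: int) -> str:
    """Swap the punctuation mark at `index` for its fullwidth twin (if any)."""
    ch = text[index]
    if ch not in FULLWIDTH:
        return text
    return text[:index] + FULLWIDTH[ch] + text[index + 1:]

adv = replace_punct("A photo of a cat, sitting on a mat.", 16)
print(adv)  # the comma is now U+FF0C; the sentence still reads the same
```

A subword tokenizer trained mostly on English text will typically map the fullwidth mark to a different, much rarer token than the ASCII one, which is exactly the distribution shift described above.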
In practical applications, PLMs are typically adapted through fine-tuning, and the training datasets for downstream tasks contain a low proportion of punctuation errors. In theory, fine-tuning could therefore make the model even more susceptible to OOD attacks, further compromising the generalization capacity of PLMs on sentences that have undergone punctuation-level attacks.
Q3: The defense is not discussed
A3: In our global rebuttal, we have added experimental results on defenses. We selected the Preceding Language Modifier and adversarial training as defenses; their efficacy is demonstrated in Tables 6 and 8.
Q4: The attack success rates are limited.
A4: The experimental results presented in this paper are obtained under highly challenging conditions, i.e., the most extreme scenarios within the experimental setup: zero perturbed words, single-punctuation attacks, only a single query, and black-box models that output only hard labels. This deliberately strict setting constrains the attainable outcomes. In Table 8, we constrained only the number of perturbed words for the other benchmark methods; even though those baselines may query the model multiple times, obtain posterior probabilities from black-box outputs, or even operate in a white-box setting, our TPPEP algorithm still achieves a superior fool rate. Notably, across the metrics Perturbed Words (%), Semantic Similarity, and Number of Queries, TPPEP consistently achieves strong results across the entire dataset compared with these benchmarks.
We have also investigated the impact of increasing the number of model queries on the fool rate. Our findings reveal that when the number of queries increases to five (TOP-5), TPPEP achieves state-of-the-art misclassification rates across all datasets.
Q5: Punctuation modifications can change the original label and the entire semantics on some task.
A5: In the revised version, we will employ other text classification datasets in place of CoLA.
In Table 8, we examine the semantic changes resulting from punctuation-level attacks: the average semantic similarity before and after the attack is 0.9955. Our current experimental findings suggest that our punctuation-level attack algorithm has a limited impact on semantics.
In the revised version, we will incorporate modifications to prevent the emergence of extreme cases. Specifically, for the set of perturbed candidate texts, we will introduce a semantic similarity threshold. Any candidate text falling below this threshold will be excluded from consideration as a potential attack candidate.
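The thresholding step could be implemented as a simple filter. The sketch below is illustrative only; the similarity function and threshold are placeholders, not the paper's actual components:

```python
def filter_candidates(original, candidates, sim_fn, threshold=0.95):
    """Keep only perturbed candidates whose semantic similarity to the
    original text stays at or above the threshold."""
    return [c for c in candidates if sim_fn(original, c) >= threshold]

# Toy similarity: Jaccard overlap of whitespace tokens (illustrative only;
# the actual method would use a learned semantic-similarity model).
def token_overlap(a, b):
    ta, tb = set(a.split()), set(b.split())
    return len(ta & tb) / max(len(ta | tb), 1)

cands = ["a red car.", "a red car !", "a blue boat."]
kept = filter_candidates("a red car", cands, token_overlap, threshold=0.5)
print(kept)  # -> ['a red car.', 'a red car !']
```

Any candidate falling below the threshold is excluded before the attack is deployed, preventing the extreme semantic shifts the reviewer worries about.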
Q6: Questions about E_text, and E_punc
A6: We trained a substitute model and use the embeddings of its final layer as the output. We feed the text and the standalone punctuation mark into the substitute model separately and take its final-layer outputs as E_text and E_punc, respectively.
Given the pivotal role that punctuation types and placements play in attacks at the punctuation level, we sought to underscore the significance of punctuation. Consequently, we deliberately employed a singular punctuation mark as an entire sentence input into the model.
The dimensions of E_text, E_pos, and E_punc are all set to 50. We opted for the addition method.
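A minimal sketch of this additive combination (the dimension of 50 is taken from the rebuttal; in the actual method the embeddings come from the substitute model, which we stub out here with constant vectors):

```python
DIM = 50  # per the rebuttal, E_text, E_pos and E_punc are all 50-dimensional

def combine(e_text, e_pos, e_punc):
    """Element-wise addition of the three embeddings (the 'addition method')."""
    assert len(e_text) == len(e_pos) == len(e_punc) == DIM
    return [t + p + q for t, p, q in zip(e_text, e_pos, e_punc)]

e = combine([0.1] * DIM, [0.2] * DIM, [0.3] * DIM)
print(len(e), round(e[0], 1))  # -> 50 0.6
```

Because all three embeddings share the same dimensionality, element-wise addition (rather than concatenation) keeps the combined representation at 50 dimensions.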
Our TPPEP method draws inspiration from paraphrasing tasks, which involve assessing the relationship between two given texts. In the case of TPPEP, we leverage pre- and post-adversarial attack embedding vectors to determine the success of the attack.
During the testing phase, a single pass through the substitute model is sufficient to obtain the ranking of the perturbed texts, which is what our O(1) time complexity refers to.
Further elaboration on the embedding process and the complexity analysis is provided in the supplementary materials; due to the word limit, please refer there for additional details.
Strengths: + This paper is well written and easy to follow.
+ This reviewer appreciates the novelty of the proposed punctuation-level attacks, and the motivation behind, i.e., attacking text models with minimal perceptual influence on human eyes. It indeed brings some insights.
+ Besides its effectiveness in fooling various textual models, the authors also propose a TPPEP method to accelerate the attacking procedure. I believe this can significantly improve the practical use of the punctuation-level attacks.
+ The attack results on the update-to-date Stable Diffusion model are quite interesting.
Weaknesses: - Though the proposal of punctuation-level attacks is indeed well motivated, a major concern is that only three tasks (thus three types of victim models) are selected for the attack-effectiveness evaluation in the experimental studies. It would be better if more tasks and victim models were evaluated; the proposed approach would then be more fully validated and the conclusions more convincing.
- The current experimental results fail to provide deeper insight into how the method works in different scenarios. In fact, as a new type of text-attack method, there should be more in-depth analyses and explanations that quantitatively or qualitatively show its effectiveness.
- The notations in the paper are not always consistent, and some parts are not well illustrated, especially Sec. 3.3. It is suggested to be re-organized.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: As I stated, the results on the text-to-image task are interesting, but not entirely convincing. At the least, how can we know whether the attack is successful or not by human eyes? There should be more convincing validation.
Overall, I like the idea in the paper and the current drawback is the results. Please consider my suggestions about the experimental studies. I am open to increasing my score if my concerns are satisfactorily addressed.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 4 excellent
Limitations: The current discussion about the limitations is practical but not so comprehensive. For example, the resource required by the method is not discussed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Q1: It would be better if more methods are selected for evaluation.
A1: Tables B, C, and D present the results of extending the punctuation-level attack and the TPPE and TPPEP algorithms to three distinct tasks: summarization, semantic similarity scoring, and text-to-image generation.
In the context of the summarization task, we employ the ROUGE-1 metric as the dependent variable y and formulate a predictive model to estimate the ROUGE-1 score for the test set, and subsequently, we select the candidate attack text characterized by the lowest predicted ROUGE-1 value.
For the text-to-image task, we utilize the CLIP score as y and identify the candidate attack text by determining the text associated with the smallest predicted CLIP score value.
In the semantic similarity scoring task, our y corresponds to the semantic similarity between the text before and after the attack. To model this, we develop a predictive model to estimate the semantic similarity values for the test set.
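The candidate-selection rule shared by all three tasks (pick the attack text whose predicted metric y is lowest) can be sketched as follows; `predict_score` and the toy length-based predictor are illustrative stand-ins for the learned predictive model, not the actual implementation.

```python
def select_attack_text(candidates, predict_score):
    # Choose the candidate attack text with the lowest predicted task
    # metric (ROUGE-1, CLIP score, or semantic similarity).
    return min(candidates, key=predict_score)

# Toy stand-in for the learned predictor: longer text -> higher score.
candidates = ["a cat sits,", "a cat, sits!", ",a cat sits by"]
best = select_attack_text(candidates, len)
```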
Table B The results of text to image task
| | Without Attack | TOP-1 | TOP-3 | TOP-5 | Traversal |
| ---------- | -------------- | ------ | ------ | ------ | -------- |
| clip-score | 0.3278 | 0.3176 | 0.3069 | 0.3022 | 0.2610 |
Table C The results of summarization
| | Without Attack | TOP-1 | TOP-3 | TOP-5 | Traversal |
| ------- | -------------- | ----- | ----- | ----- | -------- |
| ROUGE-1 | 11.69 | 10.91 | 9.65 | 9.11 | 5.22 |
Table D The results of semantic-similarity-scoring
| ST12 | Sentence-BERT Pearson | Sentence-BERT Spearman | DistilBERT Pearson | DistilBERT Spearman |
| -------------- | ------- | -------- | ------- | -------- |
| Without Attack | 0.7990 | 0.6988 | 0.8056 | 0.7257 |
| TOP-1 | 0.7874 | 0.6862 | 0.7902 | 0.7035 |
| TOP-3 | 0.7760 | 0.6738 | 0.7759 | 0.6990 |
| TOP-5 | 0.7654 | 0.6626 | 0.7649 | 0.6745 |
| Traversal | 0.6992 | 0.5832 | 0.6694 | 0.6048 |
The attack methodologies showcased in Tables B, C, and D are predominantly insertion-based. Due to time constraints, we intend to incorporate the outcomes of alternative attack approaches in the revised version of this paper.
Moreover, in order to comprehensively assess the effectiveness of our attack algorithm, we have included several additional evaluation metrics in Table 8 of the global rebuttal. In addition to the fooling rate, we have incorporated metrics such as Perturbed Words, Semantic Similarity, and Average Number of Queries. At present, our TPPEP algorithm has established state-of-the-art results across these metrics.
Q2: As a new type of text-attack method, there should be more in-depth analyses and explanations to quantitatively or qualitatively show the effectiveness.
A2: In the global rebuttal, we have included discussions on defense mechanisms, comparative outcomes with other algorithms, and more. Our intent is for these experimental findings to offer additional insights into punctuation-level attacks.
Q3: The notations in the paper are not always consistent.
A3: We greatly appreciate your valuable suggestions. We will meticulously review the notations throughout the entire manuscript. In the revised version, we will also reorganize Section 3.3 in accordance with your recommendations.
Q4: As I stated, the results of the text-to-image task are interesting, while not so convincing.
A4: In the global rebuttal, we present the latest experimental outcomes for the text-to-image task. We use the CLIP score as the metric for evaluating attacks and the Pokemon BLIP Captions dataset, split into training and testing sets in an 8:2 ratio. The results in Table 3 of the supplementary materials show the impact of punctuation-level attacks: following these attacks, the CLIP scores dropped from 0.3281 and 0.4040 to 0.2484 and 0.3468, respectively.
---
Rebuttal Comment 1.1:
Title: Thanks for the rebuttal
Comment: I appreciate the effort in providing the rebuttal. In general, I think my concerns are well solved, especially the experimental study issue. The paper quality will be improved by addressing the concerned issues in the revision. I have raised my rating a bit accordingly.
---
Reply to Comment 1.1.1:
Title: Thanks for the response
Comment: Thanks a lot for your pertinent feedback and suggestions, and your reconsideration of our work. We will take all the suggestions/requests from all the reviewers/chairs into consideration to make the paper more solid. | Summary: This paper introduces a new approach to textual attacks called the punctuation-level attack. The method aims to fool text models while minimizing its impact on human perception and understanding. The paper discusses the effectiveness of this attack strategy, presents a search method to optimize its deployment, and provides experimental results showcasing its success. The authors also apply the single punctuation attack to summarization, semantic-similarity-scoring, and text-to-image tasks, achieving encouraging results. The paper concludes that the punctuation-level attack is more imperceptible to human beings and has less semantic impact compared to traditional character-/word-/sentence-level perturbations. The integrated Text Position Punctuation Embedding (TPPE) allows the punctuation attack to be applied at a constant cost of time. The experimental results on public datasets and state-of-the-art models demonstrate the effectiveness of the punctuation attack and the proposed TPPE.
Strengths: 1. The paper introduces a new approach to textual attacks called the punctuation-level attack, which is different from traditional character-/word-/sentence-level perturbations.
2. The punctuation-level attack is designed to be more imperceptible to human beings and has less semantic impact compared to traditional perturbations.
3. The paper presents a search method called Text Position Punctuation Embedding (TPPE) to optimize the deployment of the punctuation-level attack.
4. The paper provides experimental results showcasing the effectiveness of the punctuation-level attack and the proposed TPPE on public datasets and state-of-the-art models.
Weaknesses: The adversarial attacks discussed in this paper can be categorized as non-pure white-box attacks, as the attack objective may differ from the model's evaluation metric. It is crucial to explicitly acknowledge this fact in the paper, as it is widely recognized that achieving white-box robustness represents an upper bound and is significantly more challenging than black-box robustness.
It appears that the number of attack iterations is restricted. To ensure robustness evaluation, it is advisable to ensure attack convergence by employing an adequate number of iterations.
In the experimental section, when comparing the proposed method with other approaches, a fixed number of updating steps is consistently utilized.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: The paper lacks specific information regarding the implementation details of adversarial training. It does not provide explicit explanations regarding the ratio of clean versus adversarial samples used during training, nor does it clarify whether all methods employ identical training strategies.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The main emphasis of the paper is placed on evaluating the punctuation-level attack's efficacy within specific tasks, including summarization, semantic-similarity-scoring, and text-to-image tasks. However, the evaluation of this attack is not comprehensive across a diverse set of NLP tasks, limiting the extent to which the findings can be generalized.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Q1: The adversarial attacks discussed in this paper can be categorized as non-pure white-box attacks.
A1: We greatly appreciate your suggestion and will incorporate the necessary revisions; this aspect will be explicitly acknowledged and emphasized in the revised version of the paper.
Q2: It appears that the number of attack iterations is restricted. To ensure robustness evaluation, it is advisable to ensure attack convergence by employing an adequate number of iterations.
A2: The outcomes of multiple attacks are presented in Table 12. We observed a significant increase in the model's fooling rate after multiple attacks, and the fooling rates gradually converge. Beyond a certain number of attack iterations, the gain in fooling rate becomes relatively stable; in such cases, we consider it appropriate to proceed with the next punctuation-based attack process.
| Dataset | Model | Attack | Top-10 | Top-20 | Top-30 |
| ------- | ------------ | ------------ | ------ | ------ | ------ |
| CoLA | ELECTRA | Insertion | 74.59% | 75.17% | 76.51% |
| CoLA | ELECTRA | Displacement | 77.89% | 78.93% | 79.41% |
| CoLA | ELECTRA | Replacement | 50.60% | 59.31% | 63.92% |
| CoLA | XLMR | Insertion | 76.51% | 83.41% | 85.81% |
| CoLA | XLMR | Replacement | 15.20% | 18.14% | 19.61% |
| QQP | DistillBERT2 | Insertion | 26.86% | 30.61% | 32.68% |
| QQP | DistillBERT2 | Displacement | 23.46% | 26.16% | 26.67% |
| QQP | DistillBERT2 | Replacement | 14.77% | 16.87% | 17.93% |
| QQP | DistillBERT2 | Insertion | 29.21% | 31.81% | 35.55% |
| QQP | DistillBERT2 | Displacement | 19.88% | 22.50% | 23.09% |
| QQP | DistillBERT2 | Replacement | 24.46% | 26.48% | 27.54% |
| | RoBERTa | Insertion | 37.02% | 45.36% | 49.60% |
| | RoBERTa | Displacement | 15.96% | 20.41% | 23.07% |
| | RoBERTa | Replacement | 25.58% | 33.11% | 36.89% |
| | deBERTa | Insertion | 44.70% | 53.00% | 56.00% |
| | deBERTa | Displacement | 26.74% | 32.65% | 35.41% |
| | deBERTa | Replacement | 22.51% | 29.34% | 33.97% |
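The stopping heuristic described above (move on once the per-round fooling-rate gain stabilizes) could look roughly like this; the `min_gain` threshold is an illustrative assumption, not a value from the paper.

```python
def attack_until_plateau(fool_rates, min_gain=0.02):
    # Return the index of the first iteration whose fooling-rate gain
    # over the previous round falls below min_gain.
    for i in range(1, len(fool_rates)):
        if fool_rates[i] - fool_rates[i - 1] < min_gain:
            return i
    return len(fool_rates) - 1

# E.g. the ELECTRA/Insertion row above: 74.59%, 75.17%, 76.51%.
stop = attack_until_plateau([0.7459, 0.7517, 0.7651])
```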
Q3: In the experimental section, when comparing the proposed method with other approaches, a fixed number of updating steps is consistently utilized.
A3: Throughout the main text, the supplementary materials, and all experiments included in the global rebuttal submission, we consistently employed an identical number of updating steps.
Q4: The paper lacks specific information regarding the implementation details of adversarial training.
A4: In our adversarial training, we maintain a 1:1 ratio between clean and adversarial samples during the training process. Specifically, we employ a learning rate of 0.0002 over 27 epochs, optimizing the cross-entropy loss with the Adam optimizer. Notably, all methods adhere to identical adversarial training strategies.
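As a rough illustration of the setup described above, the hyperparameters can be collected in a config and the 1:1 clean/adversarial mixing sketched in plain Python; the names and the shuffling scheme are assumptions of this sketch, and the actual model and optimizer (framework-specific) are omitted.

```python
import random

# Hyperparameters quoted in the answer above; the model and optimizer
# are framework-specific and omitted from this sketch.
ADV_TRAIN = {"lr": 0.0002, "epochs": 27, "optimizer": "Adam",
             "loss": "cross-entropy", "clean_to_adv_ratio": 1}

def mix_clean_adversarial(clean, adversarial, seed=0):
    # Build a training pool with a 1:1 ratio of clean to adversarial
    # samples, shuffled reproducibly.
    assert len(clean) == len(adversarial)
    pool = list(clean) + list(adversarial)
    random.Random(seed).shuffle(pool)
    return pool
```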
Q5: The evaluation of this attack is not comprehensive across a diverse set of NLP tasks.
A5: Tables B, C, and D present the results of our endeavor to extend punctuation-level attacks, TPPE, and TPPEP algorithms to three distinct tasks: summarization, semantic similarity scoring, and text-to-image generation.
In the context of the summarization task, we employ the ROUGE-1 metric as the dependent variable y and formulate a predictive model to estimate the ROUGE-1 score for the test set, and subsequently, we select the candidate attack text characterized by the lowest predicted ROUGE-1 value.
For the text-to-image task, we utilize the CLIP score as y and identify the candidate attack text by determining the text associated with the smallest predicted CLIP score value.
In the semantic similarity scoring task, our y corresponds to the semantic similarity between the text before and after the attack. To model this, we develop a predictive model to estimate the semantic similarity values for the test set.
Table B The results of text to image task
| | Without Attack | TOP-1 | TOP-3 | TOP-5 | Traversal |
| ---------- | -------------- | ------ | ------ | ------ | -------- |
| clip-score | 0.3278 | 0.3176 | 0.3069 | 0.3022 | 0.2610 |
Table C The results of summarization
| | Without Attack | TOP-1 | TOP-3 | TOP-5 | Traversal |
| ------- | -------------- | ----- | ----- | ----- | -------- |
| ROUGE-1 | 11.69 | 10.91 | 9.65 | 9.11 | 5.22 |
Table D The results of semantic-similarity-scoring
| ST12 | Sentence-BERT Pearson | Sentence-BERT Spearman | DistilBERT Pearson | DistilBERT Spearman |
| -------------- | ------- | -------- | ------- | -------- |
| Without Attack | 0.7990 | 0.6988 | 0.8056 | 0.7257 |
| TOP-1 | 0.7874 | 0.6862 | 0.7902 | 0.7035 |
| TOP-3 | 0.7760 | 0.6738 | 0.7759 | 0.6990 |
| TOP-5 | 0.7654 | 0.6626 | 0.7649 | 0.6745 |
| Traversal | 0.6992 | 0.5832 | 0.6694 | 0.6048 |
The attack methodologies showcased in Tables B, C, and D are predominantly insertion-based. Due to time constraints, we intend to incorporate the outcomes of alternative attack approaches in the revised version of this paper.
Moreover, in order to comprehensively assess the effectiveness of our attack algorithm, we have included several additional evaluation metrics in Table 8 of the global rebuttal. In addition to the fooling rate, we have incorporated metrics such as Perturbed Words, Semantic Similarity, and Average Number of Queries. At present, our TPPEP algorithm has established state-of-the-art results across these metrics. | Summary: This paper introduces an adversarial attack against NLP models based on punctuation perturbations. The authors introduce an attack called Text Position Punctuation Embedding (TPPE) that comprises an insertion, displacement, deletion, and replacement attack based on textual punctuation (e.g., commas or periods).
Experiments are conducted on various datasets, ranging from text classification (CoLA) to paraphrasing (QQP) and natural language inference (WANLI). The attacks are applied to ELECTRA, XLMR, and BERT-based models (DistilBERT, RoBERTa, DeBERTa). Additionally, the attack is applied to semantic-similarity-scoring (STS12), summarization (gigaword), and text-to-image tasks (prompting Stable Diffusion V2). Experimental results are promising, showing that the attack can be used to successfully attack models for the above tasks.
Strengths: * The paper provides an extensive analysis of punctuation-level attacks against NLP models. These attacks are promising since they have the potential to be less perceptible as compared to existing character-, word-, and sentence-level attacks.
* The analysis is extensive in that multiple NLP tasks (classification, summarization, text-to-image, etc.) are analyzed.
* It is interesting to see that the investigated models are vulnerable to punctuation-level attacks across tasks and domains.
Weaknesses: * The paper does not compare TPPE against existing works and baselines. In Section 2, the authors point out various existing works focusing on punctuation attacks. However, none of these works have been evaluated and compared against in their experimental settings. To identify and support the strengths and utility of TPPE, such experiments are essential.
* I additionally think that comparisons to character-, word, and sentence-level attacks on the selected datasets would have been insightful since these experiments would provide the reader with a better understanding of how punctuation-level attacks perform in comparison to attacks focusing on other parts of a textual sequence.
* The paper does not further analyze the semantics of perturbed adversarial examples. To support the claims of semantic imperceptibility, human experiments analyzing the change in semantics between an original sequence and its adversarial counterpart would be important. The examples in Figure 2 nicely illustrate that inserting single punctuation marks can substantially impact the meaning of a sequence. Since adversarial examples are desired to preserve the semantics of an attacked sequence, quantitative experiments would be needed to evaluate TPPE in that context.
* The paper does not discuss potential approaches to mitigate the models’ vulnerability against punctuation-level attacks, for instance by assessing whether adversarial training / data augmentation (i.e., training the model on adversarially perturbed sequences) can help increase the robustness of the attacked models. This would provide additional insights into how robust the attack is, and how it can potentially be defended against.
* The results for the text-to-image task consist only of two qualitative examples. These examples are highly interesting, but to better evaluate the vulnerability of Stable Diffusion models against such attacks, quantitative results over a larger dataset would be important.
* Overall, the paper focuses too much on introducing the attack and discussing its details and time-complexity analysis, instead of extensively evaluating its performance (the Experiments Section spans under 2 pages in the manuscript).
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: Could you provide a few additional details on model training / fine-tuning, as well as the dataset splits that you used for attacking the models? This is not mentioned in Section 4.1.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 3 good
Limitations: The paper briefly discusses Limitations in Section 5. However, potential ethical considerations arising from this research remain unaddressed. Since discussing these is quite important in this context (the proposed attack can be misused for malicious purposes), I would encourage the authors to add a section for this.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Q1: Compare the punctuation-level attack to existing works focusing on punctuation attacks.
A1: First and foremost, it is imperative to clarify that the existing research concerning punctuation attacks does not pertain to punctuation-level attacks. The works referenced as [18, 16] are essentially char-level attacks: their approach involves inserting punctuation marks within words and at word endings, transforming the original word into an OOV word (e.g., "lo,ve" and "love,"). Similarly, the study denoted as [24] simply appends fifty tildes at the end of the sentence, which perturbs too much to be suitable as a benchmark method.
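The distinction can be made concrete with a minimal sketch; the mid-word insertion position is an assumption for illustration, as the cited baselines may choose positions differently.

```python
def char_level_insert(word, mark=","):
    # Char-level baselines [18, 16]: the mark goes inside the word,
    # turning it into an OOV token (e.g. "love" -> "lo,ve").
    mid = len(word) // 2
    return word[:mid] + mark + word[mid:]

def punctuation_level_insert(tokens, pos, mark=","):
    # Punctuation-level attack: the mark is its own token and every
    # word stays intact.
    return tokens[:pos] + [mark] + tokens[pos:]
```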
A comparative summary of our findings against the outcomes reported in references [18, 16] is presented in Table 8 of the global rebuttal PDF. "Hossein" and "Nota" denote the methods of [18] and [16], respectively. The results in Table 8 indicate that the TPPEP method outperforms both "Hossein" and "Nota".
Q2: Compare the punctuation-level attack to char-, word-, and sentence-level attacks.
A2: The comparative results between our approach and the char- and word-level attack methods are presented in Table 8. Sentence-level attacks perturb the whole sentence, which is too large a perturbation to be suitable as a benchmark method. The results in Table 8 indicate that the TPPEP method outperforms the other char- and word-level benchmark methods.
In this experiment, TPPEP perturbed only one punctuation mark. To make a fair comparison between TPPEP and other attack algorithms, we limited the other algorithms to perturbing only one word, so we did not select SSTA [24] or sentence-level attacks as benchmarks. We have indeed computed the results of both SSTA [24] and the sentence-level method and can present them in the discussion if necessary; due to word limitations, we do not present them here.
Q3: Analyze the semantics of perturbed adversarial examples
A3: In Table 8 of the global rebuttal, we delve into the alterations in semantics resulting from punctuation-level attacks. We computed an average semantic similarity of 0.995526417 before and after punctuation attacks. The specific outcomes for each dataset, model, and attack technique are detailed under the "Semantic Sim" criterion in Table 8. Our current experimental findings suggest that our punctuation-level attack algorithm has a limited impact on semantics.
In addition, we have incorporated human evaluations to assess semantic similarity before and after attacks. From each of the three datasets, we randomly selected 100 sentences along with their perturbed counterparts, and recruited 15 participants. A five-point scale was used, where a score of 1 indicates substantial dissimilarity between the two sentences and 5 indicates substantial similarity. The computed mean similarity score is 4.93.
Q4: The defense is not discussed
A4: In our global rebuttal, we have incorporated experimental outcomes to supplement our defense strategy. We introduce a defense technique based on the concept of a Preceding Language Modifier, whose core objective is to restore the attacked text to its original, coherent form; such rehabilitated text poses no threat to well-established models. The efficacy of the Preceding Language Modifier is demonstrated in Table 6, revealing its capacity to effectively mitigate the impact of punctuation-based attacks.
We have also investigated the impact of adversarial training on punctuation attacks, with the experimental findings presented in Table 7. The results demonstrate that adversarial training effectively mitigates the effects of punctuation-level attacks without compromising the model's accuracy.
Q5: The results for the text-to-image task have not been discussed in detail.
A5: In the global rebuttal, we present the latest experimental outcomes for the text-to-image task. We use the CLIP score as the metric for evaluating attacks and the Pokemon BLIP Captions dataset, split into training and testing sets in an 8:2 ratio. The results in Table 9 show the impact of punctuation-level attacks: following these attacks, the CLIP scores dropped from 0.3281 and 0.4040 to 0.2484 and 0.3468, respectively.
Q6: The paper focuses too much on introducing punctuation-level attack
A6: Diverging from the extensively explored domains of char-, word-, and sentence-level attacks, we introduce a pioneering approach focusing on attacks at the punctuation level, so we believe it is necessary to describe the details of the attack.
We have conducted supplementary experiments, which are presented both in the global rebuttal section and earlier in this document.
We greatly appreciate the insightful suggestions you provided regarding our experiments. We will effectuate revisions to the paper by incorporating additional experimental segments while judiciously trimming the content related to attack intricacies and time complexity discussions.
Q7: Details of the experiment were not specified.
A7: We have provided detailed information about training the substitute model in the supplementary materials (Section 1.2). When training the TPPEP model, the training data was split into training and validation sets in an 8:2 ratio. TPPEP was trained with a batch size of 300 over 9 epochs, using the Adam optimizer with a learning rate of 0.001 and the cross-entropy loss.
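A minimal sketch of the data split and training settings quoted above; the function and dict names are illustrative, not from the paper's code.

```python
def split_8_2(data):
    # 8:2 train/validation split used when training the TPPEP model.
    cut = int(len(data) * 0.8)
    return data[:cut], data[cut:]

# TPPEP training settings quoted above; names here are illustrative.
TPPEP_TRAIN = {"batch_size": 300, "epochs": 9,
               "optimizer": "Adam", "lr": 0.001, "loss": "cross-entropy"}
```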
Q8: Potential ethical considerations arising from this research remain unaddressed.
A8: We greatly appreciate your suggestions. In the revised version, we will incorporate a dedicated section discussing potential ethical considerations.
---
Rebuttal Comment 1.1:
Title: Thanks to the authors!
Comment: I greatly appreciate the detailed response and provision of additional results! I have raised my score accordingly.
However, given that the authors added a large number of additional results which would need to be incorporated into the manuscript (and will hence likely change central parts of the paper), I believe that this paper could also benefit from an additional round of reviews. I am happy to further discuss this with other reviewers and ACs.
---
Reply to Comment 1.1.1:
Title: The central part does not change
Comment: Dear Reviewer LLfX,
Thanks a lot for your reply and your acknowledgment of our effort in the rebuttal. And we also appreciate your updated evaluation of our work.
In terms of your concern, we want to emphasize that, the most significant contribution of this paper, is still the first proposal of the punctuation-level attack and its associated search method to accelerate the attack. So, the central part does not change even considering the new results and discussion in the rebuttal.
We appreciate the insightful feedback from all reviewers, and we will accordingly update the manuscript. Primarily, we will consider the necessary discussion in terms of defense and in-depth analysis of punctuation-level attacks, and the most important results in the revision. Due to page limitations, all the other materials and experimental results will be included in the Appendix. We hope this solves your concern well. | Rebuttal 1:
Rebuttal: Upon receiving the reviewers' feedback, several sections have been incorporated to refine the original paper.
# **1 Defense Method**
We have introduced an in-depth discussion of defense strategies aimed at countering punctuation-level attacks. For real-world systems, we meticulously investigate a range of defense approaches both before and after training is complete.
## 1.1 **Preceding Language Modifier**
For already-trained models, initiating a retraining process with adversarial training methods is evidently costly and impractical. To address this scenario, we propose a modifier that aims to restore the text to its original form as much as possible following punctuation-level attacks. An alternative approach, training a seq2seq model on pairs of attacked and original texts, could also be considered; however, due to time constraints, we employ prompt learning with the grammar-enhancing large language model CoEdIT-xxl (11B) as the modifier. We conduct experiments on the most vulnerable CoLA dataset and the two strongest attack modes, i.e., Insertion and Displacement. The experimental outcomes, as depicted in Table 6, demonstrate the favorable performance of our modifier strategy. As time limitations apply, results for other datasets will be presented in the revised version of this work.
## **1.2 Adversarial Training**
For models that have not yet been trained, retraining with adversarial examples remains feasible, so we investigate adversarial training as a defense. We maintain a 1:1 ratio between clean and adversarial samples during training, employing a learning rate of 0.0002 over 27 epochs with the Adam optimizer and the cross-entropy loss; all methods follow identical adversarial training strategies. The experimental outcomes, presented in Table 7, demonstrate that adversarial training effectively mitigates the effects of punctuation-level attacks without compromising the model's accuracy.
# 2 Comparison to Benchmarks
We conducted a comparative analysis against alternative attack algorithms, namely TextFooler, BERT-Attack, and DeepWordBug, which were chosen as benchmarks. Fool Rate, Perturbed Words (%), Semantic Sim, and Number of Queries were selected as evaluation metrics, with the T5-large pre-trained model used to calculate Semantic Sim. "Label" indicates that the query output is a hard label, and "score" indicates that the query output is predicted probabilities.
As the TPPEP algorithm perturbs solely a single punctuation mark, for the purpose of a fairer comparison, we imposed the constraint that other algorithms are also allowed to target only one word, albeit with the ability to attack this word multiple times. The experimental findings, presented in Table 8, demonstrate that the TPPEP has consistently achieved state-of-the-art results across various metrics, including Fool Rate, Perturbed Words, Semantic Similarity, and Average Number of Queries. Evidently, our approach at the punctuation level attains improved fool rates in scenarios involving zero-word perturbation, single-query access, preservation of semantic information, and reduced perceptibility, when compared to alternative algorithms.
# 3 **Experiments On Text-to-Image Task**
We have expanded the quantitative experimentation section for T2I applications, employing the clip-score as our evaluation metric. The experimental outcomes are presented in Table 9, where "ori-text1" refers to the sentence "a corgi is playing piano, oil on canvas"; inserting a comma as a punctuation-level attack on "ori-text1" yields "adv-text1". "ori-text0" refers to the sentence "a professional photograph of an astronaut riding a triceratops". Notably, despite a high semantic similarity of 0.9843 between "ori-text1" and "adv-text1", the clip-score decreases from 0.4040 to 0.3468 post-attack.
Additionally, in order to assess the robustness of the Stable Diffusion model against punctuation-level attacks, we employ the "pokemon-blip-captions" dataset for evaluating the attack's impact. This dataset comprises a total of 832 texts across its training and testing subsets. Following punctuation insertion attacks, the overall clip-score declines from 0.3273 to 0.2590; this reduction is also reflected in both the training and testing subsets individually, underscoring the model's vulnerability.
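The evaluation amounts to averaging a per-text score over the dataset before and after the attack; in this sketch `len` is a toy stand-in for the real CLIP scorer.

```python
def mean_metric(texts, score):
    # Average a per-text metric (here a stand-in for the CLIP score)
    # over a dataset, to compare before vs. after attack.
    return sum(score(t) for t in texts) / len(texts)

# Hypothetical scorer (text length) for illustration only.
before = mean_metric(["a pokemon", "a blue pokemon"], len)
after = mean_metric(["a pokemon,", "a blue, pokemon"], len)
```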
# 4 Analysis of Punctuation-Level Attacks
We examine the factors influencing the success rate of punctuation-based attacks. In our textual adversarial perturbations, punctuation information and its positional context are essential components. Thus, we delve into the impact of positional information and punctuation characteristics on the average Fool Rate.
Table 10 illustrates the effects of various punctuation types on the Fool Rate, detailing the top eight punctuation marks with the highest average misclassification rates. The "?" symbol emerges as the most influential, exhibiting an elevated average misclassification rate of 8.36%. This signifies that, across the CoLA, QQP, and WANLI datasets and their respective six models, when the "?" symbol is iteratively inserted, deleted, replaced, or displaced in a given sample, the average probability of a successful fool reaches 8.36%.
In Table 11, "preceding" refers to the first third of the sentence and "subsequent" to the final third. For insertion attacks, targeting the middle portion of the sentence is most effective, while for replacement and displacement attacks, the subsequent segment of the attacked sentence yields optimal results.
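The position buckets defined above (first third vs. final third) can be sketched as a small helper; the function name and the handling of boundary indices are illustrative assumptions.

```python
def position_bucket(index, length):
    # "Preceding" = first third of the sentence, "subsequent" = final
    # third; everything in between falls in the middle bucket.
    if index < length / 3:
        return "preceding"
    if index >= 2 * length / 3:
        return "subsequent"
    return "middle"
```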
Pdf: /pdf/8a9af0f66afce6faa0f575476bc02ae45da47d8e.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Adaptive recurrent vision performs zero-shot computation scaling to unseen difficulty levels | Accept (poster) | Summary: The paper introduces a combination of Convolutional Recurrent Neural Networks (ConvRNNs) with a learnable termination mechanism from Adaptive Recurrent Neural Networks (AdRNNs), with the purpose of solving complex, variable-difficulty vision tasks. The paper adapts to the purpose a ConvGRU architecture, or alternatively a “LocRNN” architecture based on a computational model of iterative contour processing in primate vision. Both architectures, together with a ResNet baseline, are tested on the “Pathfinder” and “Mazes” tasks, with a variety of task difficulties. Importantly, the authors show that the AdRNNs architecture are able to generalize to task difficulties greater than those seen during training, requiring a number of computation steps beyond those used in training.
Strengths: The paper consists of an interesting and timely investigation into the hot topic of adaptive computation for problem solving.
While the authors are not the first to work on this, they appear to be the first to have demonstrated that RNN models can be effectively applied for vision tasks with adaptive difficulty, and generalize to unseen difficulty levels.
The work is clearly motivated, and the overall story and contribution is immediately apparent.
Weaknesses: I believe that the following points need to be addressed for my assessment of the paper to change in a more positive direction:
* **[a]** Overall, the main contribution of the paper is the introduction of LocRNN, as all results, including the one on generalization to unseen difficulty levels for tasks, are downstream of LocRNN being a better architecture than alternatives. However, Section 4.3 appears crowded and hard to parse. I believe that the inclusion of figures illustrating the architectures and describing the differences between ConvRNN and LocRNN would greatly improve the presentation. Moreover, not much effort is spent intuitively justifying the reasons behind the improved performance of LocRNN over other ConvRNNs.
* **[b]** In Line 197, it is mentioned that the computational models of cortical recurrent processing are described in detail in Supp. Info, but there is no such section in the supplementary material. Please include this section, to again better explain the intuition behind the introduction of LocRNN.
* **[c]** There are small formatting issues. In Line 40, citation names are used without brackets even when they don’t naturally flow within the sentence.
Technical Quality: 3 good
Clarity: 1 poor
Questions for Authors: What is the computational model of cortical processing relevant to the introduction of LocRNN? What is the intuition behind LocRNN’s improved performance and stability over other ConvRNNs?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 1 poor
Contribution: 3 good
Limitations: The authors seem to have fairly acknowledged the limitations of their work in their “Limitations” section.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We take the PDEs proposed in Li (1998), which model the connections between V1 neurons in continuous time, apply Euler integration to convert them into discrete-time difference equations, and implement the result as a convolutional Recurrent Neural Network. The RNN obtained from Li (1998) after Euler integration is what we refer to as LocRNN in our submission. We have contrasted the LocRNN and GRU architectures in Fig 8.
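The Euler-integration step described above can be illustrated with a minimal sketch; the dynamics function `f` and step size here are hypothetical toys, not the actual Li (1998) equations:

```python
def euler_discretize(f, h0, dt, steps):
    """Forward-Euler discretization: turn dh/dt = f(h) into the
    discrete recurrence h[t+1] = h[t] + dt * f(h[t])."""
    h = h0
    trajectory = [h]
    for _ in range(steps):
        h = h + dt * f(h)
        trajectory.append(h)
    return trajectory

# Toy dynamics dh/dt = -h: the state decays toward 0, as in a leaky unit.
traj = euler_discretize(lambda h: -h, h0=1.0, dt=0.1, steps=10)
```

The same recipe, applied channel-wise with convolutions in place of `f`, yields a discrete-time recurrent cell of the kind the rebuttal describes.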
We will include the original PDEs and the derivations, which we had omitted from the Supplementary, and fix the formatting issues.
References:
Tay, Y., Dehghani, M., Abnar, S., Shen, Y., Bahri, D., Pham, P., ... & Metzler, D. (2020). Long range arena: A benchmark for efficient transformers. arXiv preprint arXiv:2011.04006.
---
Rebuttal Comment 1.1:
Title: Reply to the Authors
Comment: We thank the authors for having included an illustration of LocRNN in the rebuttal pdf, and having clarified the origins of the architecture. I consider these points necessary for my score to remain a tentative accept. | Summary: Authors combine Adaptive computation time (ACT) with convolutional recurrent neural networks to solve two tasks with which their generalization properties can be studied. These adaptive-timestep RNNs were found to halt quicker for easier problems while taking more steps for harder ones. This also leads to generalization to an unseen difficulty level.
Strengths: The paper is well written and the results are convincing for the claim.
* Pathfinder task is known to be challenging (Tay et al., 2020) so high accuracy and generalization on that task is very interesting.
* Adaptive choosing of timesteps within a dataset is important & valuable for neuroscience and cognitive science research since it allows for identification of harder and easier stimuli.
* Inference-time timesteps being more than training is a strong indicator that the RNNs are generalizing.
Weaknesses: While I found the work interesting, I found the paper to be severely lacking in terms of novelty. The paper also misses some related works ([1], [2]).
* As mentioned in the paper, Bansal et al. (2022) have studied this with the maze task. The authors say "recurrent networks used in their study are not adaptive, human intervention is required to specify the number of recurrent computational steps by brute force during the testing phase", which I do not agree with - since their RNN stops changing its response after a while, that point can be used to decide when to stop.
* [2] has studied this with Pathfinder task - even finding generalization to difficult levels.
Between Graves et al. (2016), Bansal et al. (2022) and [2] I do not see the novelty of this work.
1. Spoerer CJ, Kietzmann TC, Mehrer J, Charest I, Kriegeskorte N (2020) Recurrent neural networks can explain flexible trading of speed and accuracy in biological vision. PLOS Computational Biology 16(10): e1008215. https://doi.org/10.1371/journal.pcbi.1008215
2. Drew Linsley, Alekh Karkada Ashok, Lakshmi Narasimhan Govindarajan, Rex Liu, Thomas Serre (2020) Stable and expressive recurrent vision models. NeurIPS 2020
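The stopping rule the reviewer alludes to — iterate until the response stops changing — can be sketched as follows; the step function, readout, and tolerance below are illustrative placeholders, not the setup of Bansal et al. (2022):

```python
def run_until_converged(step_fn, h, out_fn, tol=1e-4, max_steps=100):
    """Iterate a recurrent update until the readout changes by less
    than tol between consecutive steps, then halt."""
    prev = out_fn(h)
    for t in range(1, max_steps + 1):
        h = step_fn(h)
        out = out_fn(h)
        if abs(out - prev) < tol:
            return h, t
        prev = out
    return h, max_steps

# Toy contraction h -> 0.5 * h: the readout converges geometrically to 0.
state, steps = run_until_converged(lambda h: 0.5 * h, h=1.0, out_fn=lambda h: h)
```

Unlike a learned halting unit, this rule requires choosing the tolerance by hand, which is part of the disagreement in this exchange.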
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. I am curious to hear why the authors say RNNs used in Bansal et al., (2022) "are not adaptive, human intervention is required to specify the number of recurrent computational steps by brute force during the testing phase" since the RNN can be run till response stops changing.
2. What properties of the images make the RNNs stop early vs late? For example, in Fig 4 some 19x19 mazes seem to need as many timesteps as 25x25 mazes. Have you seen what causes this? Same for the other task. Can this lead to some interpretation/understanding of the RNNs and the solution they are implementing?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 1 poor
Limitations: NA
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: "I am curious to hear why the authors say RNNs used in Bansal et al., (2022) "are not adaptive, human intervention is required to specify the number of recurrent computational steps by brute force during the testing phase" since the RNN can be run till response stops changing"
1. As this is an important issue relevant to several reviews, we have addressed this general issue in our common response. We also tried the Linsley et al 2020 algorithm with a threshold on state change and it performed very poorly on Mazes (see Table 3 in the Rebuttal).
"What properties of the images makes the RNNs stop early vs late? For example, in Fig 4 some 19x19 mazes seem to need the timesteps as 25x25 mazes. Have you seen what causes this? Same for the other task. Can this lead to some interpretation/understanding of the RNNs and the solution they are implementing?"
2. Fig 4 plots halting step as a function of maze solution path length. There is a clear linear relationship between halting step and solution path length (not maze size) for shorter path lengths. For longer maze path lengths, the required computation saturates.
---
Rebuttal Comment 1.1:
Title: Reply to author rebuttal
Comment: Thank you the responses and the additional experiments.
>"I am curious to hear why the authors say RNNs used in Bansal et al., (2022) "are not adaptive, human intervention is required to specify the number of recurrent computational steps by brute force during the testing phase" since the RNN can be run till response stops changing"
> **Response:** As this is an important issue relevant to several reviews, we have addressed this general issue in our common response. We also tried the Linsley et al 2020 algorithm with a threshold on state change and it performed very poorly on Mazes (see Table 3 in the Rebuttal).
I understand that there are downsides to Bansal et al. (2022) and Linsley et al. (2020), but that still doesn't mean they are not adaptive. I would encourage updating the manuscript with the downsides and removing the claim that they are not adaptive.
> "What properties of the images makes the RNNs stop early vs late? For example, in Fig 4 some 19x19 mazes seem to need the timesteps as 25x25 mazes. Have you seen what causes this? Same for the other task. Can this lead to some interpretation/understanding of the RNNs and the solution they are implementing?"
> **Response:** The plot in Fig 4 plots halting step as a function of maze solution path length. There is a clear linear relationship between solution path length (not maze size) for the shorter path lengths. For longer maze path lengths, the required computation saturates.
As you note, the RNN seems to have captured a factor beyond path lengths. For me it would be interesting if you used your adaptive timestep method to provide interpretations of an RNN - which I think is novel and beyond what was already done by Bansal et al. (2022) and Linsley et al. (2020). For me, this would push the paper above the threshold for publication. As it stands, given that ACT is a known method and Bansal et al. (2022) & Linsley et al. (2020) have studied adaptive time for Mazes & Pathfinder respectively, I do not see the novelty in simply introducing ACT for Mazes and Pathfinder.
From reading the responses and the discussion, it seems to me that the main contribution is in fact the new RNN architecture (LocRNN) and not the training algorithm. However, it is still unclear to me why LocRNN is better than convGRU, hconvGRU, CORNet-S etc. Like the LocRNN model, hconvGRU and CORNet-S are also neuroscience-inspired. So, what is the critical novel circuitry in LocRNN that enables such great performance? For example, LocRNN has LayerNorm while convGRU, hconvGRU and CORNet-S don't seem to - how critical is this for LocRNN's superior performance? LocRNN has separate inhibitory and excitatory populations (which could explain superiority to vanilla convGRU) but so does hconvGRU - so where exactly does the improvement come from? I think a systematic analysis where LocRNN's components are lesioned and explicitly contrasted with other neuroscience-inspired architectures is necessary to gauge the contributions of the LocRNN architecture.
To summarize :
1. **Limited advancement in adaptive computation compared to previous methods**: Although a previous method was added during rebuttal, I am skeptical of the results since it was done in a very short time-frame and some experiments are missing. Bansal et al. (2022) showed strong generalization for the maze task and Linsley et al. (2020) for the pathfinder task - I am skeptical without more experiments that they won't work for each other. I think more extensive evaluation is needed before this method can be concluded to be better - `all three methods {ACT, Bansal et al. (2022), Linsley et al. (2020)} X all models {LocRNN, convGRU, hconvGRU, CORNet-S} X all tasks {maze, pathfinder}`. As it is currently, we do not know where the improvements are coming from.
2. **Interesting model but unclear where the contribution is**: LocRNN model is interesting and seems to be bringing in performance improvements. But since it is similar to previously known models, it is unclear as to what advancement in the model is causing this improvement.
---
Reply to Comment 1.1.1:
Title: Response to reviewer's response to author rebuttal
Comment: We thank you for your response to our author rebuttal. Please find below our response to your latest comments.
- _“I understand that there are downsides to Bansal et al. (2022) and Linsley et al. (2020) but that still doesn't mean that they are not adaptive. I would encourage updating the manuscript with the downsides and remove the claim that they are not adaptive.”_
We agree with the reviewer; we will update the manuscript, removing the claim that they aren't adaptive, and specifically mention the downsides we have highlighted in our rebuttal.
- _"As you note, the RNN seems to have captured a factor beyond path lengths. For me it would be interesting if you used your adaptive timestep method to provide interpretations of an RNN - which I think is novel and beyond what was already done by Bansal et al. (2022) and Linsley et al. (2020)."_
Thank you for the interesting suggestion to explore using adaptive timesteps to provide an interpretation of RNNs.
We agree; we believe studying different models' dynamics at and around the halting timestep will provide intuition on properties that further differentiate LocRNN from the baselines compared.
We have been actively trying to understand the roles of the two neuron populations in LocRNN and how they contribute to the empirical results obtained. Fig 7 in the supplementary shows state activations of the L population for a PathFinder example. While the L population lends itself to easy interpretation as performing contour integration, the S population is harder to understand. We find that its activations clearly differ from the L activations, suggesting that the two populations encode complementary information, but we are still working out what these interneuron-analogues might be encoding.
- _"So, what is the critical novel circuitry in LocRNN that enables such great performance? For example, LocRNN has LayerNorm while convGRU, hconvGRU and CORNet-S don't seem to - how critical is this for superior performance of LocRNN? LocRNN has separate inhibitory and excitatory populations (which could explain superiority to vanilla convGRU) but so does hconvGRU - so where exactly does the improvement come from?_
Thank you for your thorough review of our architectural differences with baselines we compared against. Please find our detailed response to this question above in our common response where we highlight our hypothesis on why LocRNN works better in comparison to our baselines. While both LocRNN and hConvGRU seem to be using two separate neural populations, they are implemented differently in these two architectures as highlighted in our common response.
- _“LocRNN has LayerNorm while convGRU, hconvGRU and CORNet-S don't seem to - how critical is this for superior performance of LocRNN?”_
* First, normalization is an essential component of many recent high-performing architectures like LocRNN.
* ConvGRU converged on PathFinder and Mazes only when LayerNorm was included in its architecture. For fairness, we only report performance of ConvGRU with LayerNorm in our main submission and rebuttal. We will mention this clearly in our updated manuscript.
* LayerNorm was not added to hConvGRU and CORNet-S, as they achieve normalization through batch normalization layers that are crucial for their performance; we do not change their architectures, for consistency with their published results.
- _"To summarize :_
_Limited advancement in adaptive computation compared to previous methods: Although a previous method was added during rebuttal, I am skeptical of the results since it was done in a very short time-frame…"_
We understand the concerns on the experiment quality given the short rebuttal timeframe. We are confident in our reported results which used open source implementations provided by the respective authors of hConvGRU, CORNet-S and Bansal, 2022. We shall release the code we used for our experiments on acceptance and include instructions & environment required to reproduce our results.
- _“more extensive evaluation is needed before this method can be concluded to be better - all three methods {ACT, Bansal et al. (2022), Linsley et al. (2020)} X all models {LocRNN, convGRU, hconvGRU, CORNet-S} X all tasks”_
We find that the results obtained from searching part of the above space strongly suggest the superiority of LocRNN; we are actively exploring the rest of this search space to add further strength to our analysis.
- _“Interesting model but unclear where the contribution is: LocRNN model is interesting and seems to be bringing in performance improvements.”_
We agree that the model is not fully understood yet (like many previously published high-performing models and training techniques like Batch Normalization), but believe that it will be exciting to the NeurIPS community and thus catalyze further exploration. | Summary: The manuscript presents adaptive recurrent networks for processing of static images. The proposed approach augments convolutional recurrent neural networks with the adaptive computation time mechanism, in which the RNN at each step computes an additional halting unit, the value of which is used to determine when to stop the iterative computation. This is shown to yield recurrent vision models that adapt their computational budget to scale with the difficulty level of tasks that require serial grouping operations, such as determining whether two points are located on opposite ends of the same path or segmenting the solution route of a maze. The experimental section also shows that adaptive recurrent networks with gating mechanisms successfully scale to solve the tasks at higher difficulty levels than seen during training.
Strengths: The proposed approach is novel and well-motivated. The experimental results confirm that the proposed models are able to extrapolate the learned knowledge beyond the training distribution to solve more difficult versions of the two tasks. This is quite impressive and I'm not aware of similar studies.
Weaknesses: The write-up is a bit difficult to follow:
1. It seems to me that some important details of the original ACT [1] halting mechanism are omitted in this manuscript or the proposed mechanism is a modified version, which should be clearly stated. I might be mistaken and only the formulation or notation is different. If so, please let me know.
2. Both $p_t$ and $P_{t'}$ are not really probabilities, as I understand it. You could maybe simply call $P_{t'}$ a quantity that, if it reaches a predefined threshold, halts computation.
3. $\phi$ in Equation 4 is not defined
4. It should be stated right after the first occurrence that LN stands for layer normalization.
I think some details for reproducibility are missing or unclear:
5. The sentence in line 188 states that *four* choices are explored, then in 191 it is stated that *three* different recurrent implementations are explored. As far as I understood, the first of the four choices is a non-recurrent ResNet-30. It is not clearly explained how this model is implemented as Equation 2 only covers recurrent architectures.
6. The exact architectures including hyperparameters (e.g. number of kernels $d$ and their sizes) of the recurrent models are not provided.
7. Why is R-ResNet-30 equipped with ACT in Table 1, but not in Table2?
I think the experimental setup should take the following into consideration:
8. For completeness, I think it would make sense to add a few more non-recurrent baselines. It seems intuitive that the considered tasks require multi-step reasoning, but it'd be important to show this empirically, e.g. by using a small vision transformer. I would be surprised if a transformer would completely fail on the task, given the fairly large number of training samples. I can, however, imagine that the proposed architecture performs much better on unseen difficulty levels. Showing this empirically would help in establishing AdRNNs as an important alternative to current vision models.
9. Accuracies are not reported as mean and standard deviation over multiple runs with different random seeds. This is very problematic, since with some tasks and architectures, single runs may underperform or not converge to any meaningful solution at all, especially with binary success measures (such as exactly correct maze segmentation). Aggregating metrics over several runs (e.g. 5) would help the reader interpret how significant a difference in performance really is (e.g. is it more than 1 standard deviation?)
Regarding references:
10. I think the correct citation for *convolutional* GRUs would be [2]
## References
1. Graves, Alex. "Adaptive computation time for recurrent neural networks." arXiv preprint arXiv:1603.08983 (2016).
2. Ballas, Nicolas, et al. "Delving deeper into convolutional networks for learning video representations." arXiv preprint arXiv:1511.06432 (2015).
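For concreteness, the ACT halting rule of Graves (2016) [1] discussed in this review can be sketched as below. This is a simplified editorial illustration: it accumulates per-step halting scores until a threshold is crossed, and omits the ponder cost and remainder-weighted state average of the full method — the very details item 1 above notes may differ from the paper's formulation:

```python
def act_halt(step_fn, h, eps=0.01, max_steps=50):
    """Run step_fn until the accumulated halting score reaches 1 - eps.
    step_fn returns (new_state, halting_score in [0, 1])."""
    cumulative = 0.0
    for t in range(1, max_steps + 1):
        h, p = step_fn(h)
        cumulative += p
        if cumulative >= 1.0 - eps:
            return h, t
    return h, max_steps

# Toy step with a constant halting score of 0.3: halts once the
# accumulated score 0.3 * t first reaches 0.99, i.e. at t = 4.
state, steps = act_halt(lambda h: (h + 1, 0.3), h=0)
```

The cumulative quantity here is exactly the threshold-crossing value item 2 above argues is not a probability in the usual sense.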
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: Overall, I think the proposed method is novel and has potential for impact, but given the issues mentioned in the weaknesses section, some more work is necessary for publication. I will consider to increase the score significantly should the issues be addressed during the author response period.
I have some minor comments regarding the write-up:
Line 63: starts upper-case after "and" in the previous item
Line 100 and 105: Shouldn't it be "4)" and "5)" instead of "2)" and "3)"? Either way, it is a bit confusing.
Equation 2: I'd explicitly state that $\mathbf{r}$ is the recurrent block and how $h_{-1}$ is defined. Also, I think it'd be good to explicitly state what $t$ and $t_\text{train}$ correspond to.
Line 197: "in Supp. Info" should maybe be "in the Appendix" or "in the supplementary material"
Line 266: "While ... whereas" ("while" or "whereas" should be removed)
Line 340-343: I'd avoid nesting parentheses.
## Acknowledgement of rebuttals
I have read the rebuttal and provided follow-up questions. The rebuttal addressed my concerns, except the addition of a strong non-recurrent baseline. I have accordingly increased the score slightly.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 4 excellent
Limitations: The authors state some limitations of the approach and propose directions for future research.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: 1. Our work is for the most part similar to the original ACT work; however, a key difference is that our visual reasoning tasks involve static inputs, whereas Graves (2016) can deal with variable-length sequences. Owing to this difference, our halting mechanism is the same as ACT applied to a 1-token input sequence; we will make this more explicit.
2. Line 188 contains a typo from our initial submission: we performed evaluation on only 3 implementations of recurrent computation (R-ResNet-30, Convolutional GRU, and LocRNN). Based on the newly added baselines, we will update this statement to reflect the 5 RNN cells used in total (including Linsley et al. (2018) and Linsley et al. (2020)).
3. All architectures were matched in number of parameters on the PathFinder and Mazes evaluations respectively. For PathFinder, we fix the number of kernels (d) to 32 for LocRNN, 21 for ConvGRU, and 64 for ResNet-30, with a filter size of 9x9 in the intermediate layers. For Mazes, we fix the number of kernels (d) to 128 for LocRNN, ConvGRU, ResNet-30, and R-ResNet-30, and the kernel size is fixed to 5x5.
4. In Table 2, R-ResNet-30 was also equipped with ACT; omitting the ACT label there was a typo, which we have fixed.
5. It would be very interesting to evaluate how well Vision Transformers perform on the two tasks we evaluate in our submission. Prior work (Tay et al. (2020)) showed that Transformers do not perform well on PathFinder. Given this and computational limitations, we were not able to test Vision Transformers on our versions of these tasks for the rebuttal.
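As a rough illustration of the parameter-matching described in point 3, the count for a single convolutional layer can be computed as below; this is a hypothetical accounting over one recurrent convolution per hidden size, not the full cells of the paper:

```python
def conv_params(in_ch, out_ch, k):
    """Parameter count of one conv layer: one k x k kernel per
    (input channel, output channel) pair, plus a bias per output channel."""
    return in_ch * out_ch * k * k + out_ch

# One recurrent conv over d hidden channels with 9x9 kernels, for the two
# hidden sizes quoted for PathFinder above (illustrative only).
p32 = conv_params(32, 32, 9)  # d = 32, as for LocRNN
p21 = conv_params(21, 21, 9)  # d = 21, as for ConvGRU
```

A cell with more convolutions per step (e.g. a gated cell) accumulates several such terms, which is why matching total parameter budgets can force different hidden sizes across architectures.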
---
Rebuttal Comment 1.1:
Title: Some items addressed, but
Comment: I have read all reviews, author responses, and considered the rebuttal pdf. I acknowledge that the authors addressed some of the issues mentioned in the weaknesses section of my review. However, since the authors only reported results of single runs (see item 9 in my review), it is difficult to judge how significant the differences between models are. Also, as mentioned in item 8, I think the consideration of additional strong non-recurrent baselines would've been important (ViTs were just an example). I will increase the score slightly to a borderline accept.
---
Reply to Comment 1.1.1:
Title: Results presented over multiple random initializations per reviewer request
Comment: Dear Reviewer,
We have reported the performance of both LocRNN and ConvGRU (which are the two main models of comparison in our work) across 3 random initializations. We thank you for raising this issue, we find that (1) **LocRNN is consistently better performing than ConvGRU** and (2) LocRNN’s performance across random seeds is **more reliable with lesser variance when compared to ConvGRU**.
We request you to please find the updated results in our common response posted on August 20 titled "LocRNN contrasted w/ (h)ConvGRU - more discussion and additional random seeds".
We also add the part of our comment which discusses results across multiple random initializations here for your convenience.
| **Model** | **PathFinder-21** | **PathFinder-24** | **Mazes-19** | **Mazes-25** |
|-----------------|-------------------|-------------------|------------------|------------------|
| LocRNN (ACT) | **92.89 +- 0.90** | **85.81 +- 5.57** |**86.83 +- 2.94** | **49.99 +- 4.48** |
| ConvGRU (ACT) | 82.63 +- 4.84 | 74.14 +- 6.52 | 75.1 +- 11.96 | 46.93 +- 4.2 |
We will add these new results to our updated manuscript.
NOTE:
- All numbers above represent mean +- 1 SEM
- Chance performance for PathFinder is 50%, whereas it is close to 0% for Mazes (where our performance metric measures the % of test-set mazes perfectly segmented)
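The mean ± 1 SEM aggregation used in the table above can be computed as follows; the three seed accuracies here are illustrative placeholders, not the paper's raw numbers:

```python
import math

def mean_sem(values):
    """Mean and standard error of the mean: SEM = sample std / sqrt(n)."""
    n = len(values)
    mean = sum(values) / n
    var = sum((v - mean) ** 2 for v in values) / (n - 1)  # sample variance
    return mean, math.sqrt(var / n)

# Three hypothetical seed accuracies for one model/task pair.
m, sem = mean_sem([92.0, 93.8, 92.9])
```

With only 3 seeds the SEM is a coarse estimate, which is consistent with the reviewer's request for aggregation over more runs.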
Thank you. | Summary: The primary subject of this paper is fostering computational efficiency in RNNs solving vision tasks by flexibly adapting the number of computing steps depending on the difficulty of the input. This is achieved through the addition of ACT (Adaptive Computation Time introduced by Graves, 2016) to recurrent vision models. The paper evaluates 4 models (3 with recurrence) on 2 visual reasoning tasks popular in the literature for studying RNN behavior (PathFinder and Mazes). One model is newly introduced here and given the name LocRNN. The key result put forward in the paper is that ACT-trained RNNs will indeed adaptively carry out more time steps before halting computation for difficult inputs (longer paths or larger mazes) than for easy inputs. Finally, another finding is that ACT-trained RNNs generalize to more difficult task conditions than those encountered during training.
Strengths: The paper touches upon a significant and topical challenge in deep learning: how to avoid ‘wasting’ computations on easy inputs while still solving the hard ones to achieve computational efficiency. Efforts in that direction are valuable from an ecological perspective, among other perspectives.
Although the work doesn’t address this directly, mapping out which inputs are hard versus easy for a model can help acquire insights into the strategies a model has learned to solve a (visual) task. It is also of relevance for cognitive computational neuroscience and human-model comparisons.
The paper reads very smoothly. The questions are formulated clearly, and so are the methods. It is overall well written.
Weaknesses: It is good to see the prediction confirmed. Still, the finding that ACT will flexibly halt computation early for easier inputs does not tremendously advance the field, considering it’s what Graves (2016) already reported. With an ACT-like method (“DACT”) Eyzaguirre and Soto (2020) have already found this in visual reasoning tasks too. It’s worth a citation.
Similarly, there have been other thorough demonstrations that vision RNNs can generalize to more difficult task conditions, even on the very same Pathfinder task. I encourage the authors to check out and cite Linsley et al. (2020) "Stable and expressive recurrent vision models". In particular, Fig. 3 in this current paper is explored in detail and analyzed in the Linsley (2020) article.
The introduction of LocRNN then becomes the most novel aspect of the work, but there is not a lot of in-depth discussion in the main text about how it differs from prior models informed by neuroscience. At the very least, one would require a thorough comparison between the LocRNN and the model presented in Linsley et al. (2018) regarding their performance metrics and architectural similarities.
Finally, can the authors comment on whether this approach would work in other scenarios of visual difficulty apart from simply spatial extent? The notion of visual complexity involves a host of other factors which a non-hierarchical model would fail to account for, and which needs discussion.
Typos and minor fixes:
Some of the citations appear to be in the wrong format (e.g., L40)
L. 322: Fig.4 → Fig. 3?
Fig. 3: in a revised version, probably good to discuss these results in the main text more, specifically with regards to the non-ACT controls.
L 328: Fig. 4 has no panel c.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Please refer to the weaknesses section above.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: Please refer to the weaknesses section above.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: "It is good to see the prediction confirmed. Still, the finding that ACT will flexibly halt computation early for easier inputs does not tremendously advance the field, considering it’s what Graves (2016) already reported. With an ACT-like method (“DACT”) Eyzaguirre and Soto (2020) have already found this in visual reasoning tasks too. It’s worth a citation."
1. While we agree with the reviewer that Graves (2016) has shown that ACT will flexibly halt computation early for easier inputs, we would like to highlight that our unique contribution is showing that it can be combined effectively with convolutional RNNs to improve performance on visual reasoning tasks. We thank the reviewer for pointing out the relevant work by Eyzaguirre and Soto (2020); we shall add a citation to it. A key difference between that work and our submission is that our models and evaluation are centered around extrapolation to novel difficulties on challenging visual reasoning problems. We refer to [Kim et al. 2018, Not-so-CLEVR] as evidence that the task used in Eyzaguirre and Soto (2020) is known not to be challenging, and it is clear from their experiments that they do not explore generalization to novel, harder difficulty levels.
"The introduction of LocRNN then becomes the most novel aspect of the work, but there is not a lot of in-depth discussion in the main text about how it differs from prior models informed by neuroscience. At the very least, one would require a thorough comparison between the LocRNN and the model presented in Linsley et al. (2018) regarding their performance metrics and architectural similarities."
2. Regarding comparison to Linsley et al. (2018) and Linsley et al. (2020), please refer to our comment addressing all reviewers where we discuss the difference between these works and our proposed work, as well as demonstrate our (empirical) high performance compared to models in both the above works.
"Finally, can the authors comment on whether this approach would work in other scenarios of visual difficulty apart from simply spatial extent? The notion of visual complexity involves a host of other factors which a non-hierarchical model would fail to account for, and which needs discussion."
3. We also find it very interesting to test whether our proposed approach would scale to other scenarios of visual difficulty apart from spatial extent. Our work in progress and future scope will pursue this direction by studying AdRNNs on computer vision benchmarks based on natural images.
4. Thank you for highlighting typos in the paper, we shall promptly fix these issues in the submission.
---
Rebuttal Comment 1.1:
Title: Author discussion period is closing soon
Comment: Dear Reviewer,
We would like to check if you have any concerns following our rebuttal and follow-ups that we may address during the author discussion period which ends on Aug 21st at 1 pm EDT.
Thank you. | Rebuttal 1:
Rebuttal: Thank you all for taking the time to carefully review our paper. Here, we address common points based on overlapping comments in the reviews. We provide our responses to the constructive comments common to all reviewers in the following sections of our response:
### Differences with respect to Linsley et al., 2020 and Linsley et al., 2018
CC: ikdz, cX6r
We thank the reviewers for bringing up this very relevant related work, which we mistakenly did not cite. We will certainly add a citation to Linsley et al., 2020; we agree that it is indeed relevant to our submission. We went one step further and included this model (hConvGRU trained with C-RBP), as well as the one in Linsley et al., 2018, among our baselines, using code made available by the authors (please find the updated results in Table 3 in the rebuttal). We find strong evidence in support of our model: despite tuning the above-mentioned models considerably, it proved very difficult to make them converge to a stable solution that generalizes out of distribution on the tasks we used (PathFinder and Mazes) as well as LocRNN does.
Differences between our papers:
1) @Reviewers ikdz, cX6r: It is important to note that Linsley et al., 2020 uses a simplified version of PathFinder (the stimuli used look similar, but the tasks have different rules): the 2020 task involves densely supervised segmentation of the path with a circle on one of its ends, whereas our task involves binary supervision of whether two circles are on the same path (which is the original Linsley et al., 2018 task). The main difference is the information content of the supervision: the 2020 task provides pixel-by-pixel labeling, which is very informative and simple to learn, as it enables learning to trace paths pixel by pixel; our task (following their prior work from 2018) provides only binary (sparse) classification information, which is harder to learn.
Hence, the OOD generalization shown by Linsley et al, 2020 and what we show here (and used in Linsley et al, 2018) are different and not numerically comparable.
2) Linsley et al., 2018 & Linsley et al., 2020 use the hConvGRU RNN (in 2018 they train it with BPTT, whereas in 2020 they train it with C-RBP). We add both these models to our evaluation baselines, and in our experiments they do not match the performance of LocRNN (see the last three rows of the updated Table 3 in the rebuttal). Despite increasing the number of recurrent iterations and using network output stability as the stopping criterion (as reviewer cX6r suggested), we find the network to not work well on Mazes 19x19 or Mazes 25x25 (both ConvGRU and LocRNN fare reasonably well on both these challenges). The fact that the same hConvGRU architecture learns these tasks when trained with BPTT supports the hypothesis that (a) hConvGRU+C-RBP is a complex model with many free parameters and hyperparameters, making the network difficult to train (and also to interpret), or (b) it is sensitive to particular random initializations where it works well.
3) Overall, with the above experiments we find clear and striking evidence of the novelty and improved performance of LocRNN with respect to hConvGRU (Linsley et al, 2018), and hConvGRU + C-RBP (Linsley et al, 2020).
### Using stability as a halting criterion as opposed to learning halting
We considered using network stability as a criterion for halting.
We found this to work suboptimally for a number of reasons.
1) Avoiding hand-engineered heuristics: Most importantly, it requires defining a hand-engineered heuristic on how much change in the output is considered small enough to halt. In typical segmentation tasks, the network output for the initial few timesteps is highly stable in making nonsensical predictions (predicting all pixels as the negative class) and thus, heuristics need to identify an inflection point in the output trajectory where meaningful predictions start to emerge and stabilize. In the absence of ground truth information, one cannot pick a heuristic that generalizes to unseen data.
2) Avoiding hard assumptions / constraints on stable hidden states: Learnable halting makes fewer assumptions about the hidden state's properties, and hence doesn't enforce hard constraints such as stability to be satisfied by training. Some, but not all, RNNs have stable hidden states wherein the network response stops changing after reaching an attractor. In fact, Linsley et al., 2020, which reviewer cX6r refers to, argues that expressive RNNs have an intrinsic inability to learn stable hidden states.
For RNNs to be stable, their hidden state transformation needs to model a contractive mapping (Miller & Hardt, 2019; Pascanu et al, 2013). That is, the recurrent transition function F satisfies the following inequality:
$\|F(h_t) - F(h_{t-1})\|_p < \lambda \|h_t - h_{t-1}\|_p$
RNNs with stable hidden states that satisfy the above inequality are quite difficult to train on challenging problems (these kinds of networks are similar in nature to Deep Equilibrium Models [1] and Neural ODEs [2], which are not yet as successful as RNNs in their ability to solve a wide variety of tasks), as there is an inherent tradeoff between stability of the hidden state and expressivity (Linsley et al., 2020). Even when stable models perform comparably to unstable models, as in Miller & Hardt, 2019, the authors show unstable models' advantages, such as performance improvements in the short time horizon and fewer vanishing-gradient issues.
3) Finally models operating on dynamic input (as in the real world) may not be expected (or desired) to reach a stable hidden state.
4) We also tried the Linsley et al 2020 algorithm with a threshold on state change and it performed very poorly on Mazes (see Table 3 in the Rebuttal).
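To make the contraction condition above concrete: one can empirically probe whether a trained recurrent transition behaves contractively by estimating its Lipschitz constant from sampled hidden-state pairs. The sketch below is our own illustration (the function name and sampling scheme are hypothetical, not from the submission); a transition can be a contraction only if the estimate stays below 1.

```python
import numpy as np

def empirical_lipschitz(F, h_samples, n_pairs=1000, seed=0):
    """Estimate the Lipschitz constant of a recurrent transition F in the
    Euclidean norm by sampling pairs of hidden states. F can be a
    contraction (and the RNN stable) only if this estimate stays below 1."""
    rng = np.random.default_rng(seed)
    ratios = []
    for _ in range(n_pairs):
        i, j = rng.integers(0, len(h_samples), size=2)
        h1, h2 = h_samples[i], h_samples[j]
        d = np.linalg.norm(h1 - h2)
        if d > 1e-12:  # skip (near-)identical pairs
            ratios.append(np.linalg.norm(F(h1) - F(h2)) / d)
    return max(ratios)
```

Note that sampling only lower-bounds the true Lipschitz constant, so this check can refute contractivity but never certify it.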
Pdf: /pdf/591d74225b6219717912c637aabfffa354b2d694.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: Authors show that using recurrent networks for adaptively processing static inputs for a variable number of iterations allows for zero-shot generalization to more difficult problems by simply unrolling the model predictions for more time steps at inference. Authors propose LocRNN and show model performance on the pathfinder and maze datasets, highlighting that their model uses a greater number of iterations for harder problems and can generalize to harder problems than those seen during training.
Strengths: Clarity of Writing. This was by far my favorite paper to read of my reviewing batch because of the clarity of writing. I was able to clearly understand the premise and motivation and follow along with the experiment setup and evaluation. The insight that recurrent networks can be used to scale up or down the amount of compute depending on the difficulty of the problem is well articulated.
Compelling Results. Authors show that LocRNN beats competitive baselines on two standardized benchmarks. Impressively, Table 2 shows that LocRNN beats other baselines on extrapolation datasets by a wide margin.
Weaknesses: Motivation. Although I agree with the premise of the paper, I want to push back on using the biological inspiration of variable compute for neural networks. Although humans naturally can scale to harder and easier problems, state-of-the-art vision networks do not do this. Although authors cite efficiency as a reason for favoring RNNs for variable compute problems, no experimental evidence is provided.
Limited Baselines. Similarly, although the focus of this paper is on adaptive compute, two strong baselines would be LocRNN trained with only one computational step and LocRNN trained with N computational steps (where N is some fixed number of maximum steps allowed). Given the significant engineering that went into the network architecture of LocRNN, the ResNet-30 baseline does not seem convincing to show the necessity of adaptive compute.
Datasets. Although authors evaluate on two standard benchmarks, I would argue that these toy datasets do not represent real tasks. It would be more convincing to compete LocRNN against a state-of-the-art model on a more traditional computer vision benchmark (e.g. ImageNet, COCO, etc.) One place I think this approach could shine is in object detection. Given an image with a single near-field object, we should not need to spend a lot of compute. Similarly, given an image with many far-field small objects, we should likely spend more compute.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: Intermediate Predictions. How do we know that the iteration that the model stops at is optimal? Can we evaluate the model prediction at intermediate steps and evaluate the model prediction beyond when it naturally stops?
Architecture Diagram (Sec. 4.3). Providing an architecture diagram for LocRNN would help improve the clarity of Section 4.3. Please add this to the appendix.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 4 excellent
Limitations: Authors highlight that their work explore this problem of adaptive compute on static images and will work on videos in future work. However, I would encourage authors to explore more complex problems within static image domains (e.g. classification, detection, segmentation of natural images instead of toy-like environments).
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: 1. Our experiments show that RNNs which scale computation are the only networks able to generalize to more challenging test samples by using more recurrent iterations at inference. We hypothesize that the vast levels of compute employed by SOTA networks today are thus driven by the long tail of the most difficult examples encountered during training and may be entirely unnecessary for the majority of the distribution. In a KDD23 plenary talk (8/9/23), Eric Horvitz (CSO, Microsoft) showed a plot of accuracy vs. compute and noted that the last 1% of GPT-4's accuracy required 80% of the computational cost (through longer prompts). Given the very real negative impact these massive computational systems have on the environment, there is strong motivation to reduce computation when possible. We have shown experimentally that with LocRNN and ConvGRU we can both reduce computation and increase accuracy, because we are not constrained to a one-size-fits-all level of compute like SOTA architectures are.
2. While we agree with the reviewer that the end-goal is to inspect the performance of these systems in a naturalistic task, we intentionally conducted this study using tasks in which difficulty could be experimentally manipulated to study the generalization of different architectures in complexities beyond training. This is a lot more difficult in a dataset like ImageNet, in which organizing the examples by complexity is a non-trivial endeavor. We intend to pursue this direction in future work, but believe the chosen task represents an important middle ground between the simplistic tasks in prior literature and the end-goal of naturalistic images.
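For readers unfamiliar with adaptive computation, a minimal sketch of ACT-style halting illustrates how easier inputs can consume fewer recurrent iterations at inference. This is a hedged illustration in plain Python (`step` and `halt_score` are placeholder callables, not the submission's architecture):

```python
def adaptive_unroll(step, halt_score, h0, max_steps=50, threshold=0.99):
    """ACT-style adaptive unrolling (after Graves, 2016): apply the
    recurrent transition `step` until the cumulative halting score
    exceeds `threshold`, so easier inputs use fewer iterations.

    step: h -> h' is the recurrent transition;
    halt_score: h -> value in (0, 1) is the (learned) halting unit.
    Returns the final hidden state and the number of iterations used."""
    h, cum_p = h0, 0.0
    t = 0
    for t in range(1, max_steps + 1):
        h = step(h)
        cum_p += halt_score(h)
        if cum_p >= threshold:
            break
    return h, t
```

In the full ACT formulation, the final state is a halting-probability-weighted mixture of intermediate states and a ponder cost regularizes the number of steps; the sketch keeps only the early-exit mechanism.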
---
Rebuttal Comment 1.1:
Comment: Thanks for addressing my concerns in the rebuttal (adding other non-recurrent baselines and visualization of the LocRNN architecture) and adding additional context for your results. The exposition presented above strengthens the motivation for your experiments. I think this is a well written paper that should be recommended for acceptance. | null | null | null | null | null | null |
Generalized Belief Transport | Accept (poster) | Summary: This paper describes a unifying view at various learning settings in machine learning, such as Bayesian inference, optimal communication, supervised classification and frequentist inference. All three learning settings are shown as special cases of an objective function, which can be interpreted from the lens of optimal transport.
Strengths: The proposed objective function generalises such learning concepts in a general equation. This generalisation sheds new light on the various methods.
Weaknesses: The current paper generalises the learning paradigms, but what new insight can be gained from this? Does this generalisation give rise to a new learning method or a more efficient version of an existing model that would solve an existing problem?
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: * Does this work relate to Generalised Belief Propagation [1][2]? Or fit in this [3] hierarchy of inference methods?
[1] Yedidia, Jonathan S., William Freeman, and Yair Weiss. "Generalized belief propagation." NeurIPS 2000
[2] Teh, Yee, and Max Welling. "The unified propagation and scaling algorithm." NeurIPS 2001
[3] Rosen-Zvi, Michal, Michael I. Jordan, and Alan Yuille. "The dlr hierarchy of approximate inference." arXiv (2012).
Confidence: 1: Your assessment is an educated guess. The submission is not in your area or the submission was difficult to understand. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: -
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your generous comments and suggestions.
We are grateful that you find our proposed framework generalises learning concepts in a general form.
* Regarding your comment on 'what new insight can be gained from this': thank you for the guiding questions on how we shall emphasize implications of GBT.
We have organized our thoughts in the general response, please see details there.
On a high level, our generalisation gives rise to new learning models (models lying in the interior of the cube) that provide capabilities beyond existing models and are worth exploring, and it lays out a platform where interpolations between these models can be properly investigated.
* Question: 'Does this work relate to Generalised Belief Propagation'?
The provided citations are from a different perspective. Our emphasis is not on approximate inference (though there is inference, i.e. learning algorithms), but rather on formalization of the problems themselves. | Summary: This work aims to unify learning approaches by specifying a 3D space where the dimensions represent modalities of learning that can be combined to specify various learning approaches, e.g., Bayesian and Frequentist approaches. The 3D space is defined by three learning constraints within Unbalanced Optimal transport (UOT).
Strengths: This work's conceptual aims are quite interesting and potentially illuminating and instructive.
Weaknesses: Many details are glossed over in the proofs, making them hard to read and understand. This work is limited to finite datasets and finite hypotheses, which significantly limits the generality of the work. Minimally, the hypothesis class' finiteness limits this unification's informativeness.
Technical Quality: 3 good
Clarity: 1 poor
Questions for Authors: Does this unification hold with infinite hypotheses or datasets? How might one use this unification to design new algorithms?
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 1 poor
Contribution: 2 fair
Limitations: This work is not presented in a digestible manner; the impact of this work is not effectively communicated or demonstrated.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your generous comments and suggestions.
* We are surprised, given the other reviewers' assessments, and disappointed that you find our paper hard to read.
It would be helpful if you provided specifics regarding what details were glossed over. Without that information it is not possible to respond to your critique or improve the paper.
* We respectfully disagree with your assessment on how finiteness limits our contributions.
Rather than being a limitation, we would like to point out that our approach in fact allows generalization to the continuous setting.
GBT captures existing learning models as unbalanced optimal transport (UOT) problems.
Theory and algorithms for continuous UOT were developed in [Chizat et al., 2018],
which provide the necessary machinery to extend to the continuous case. We echo the reviewer that a good learner should not be limited by a pre-selected 'hypothesis class' (finite or infinite), which is a drawback of many current learning models. As detailed in general response (1a), Proposition 12 demonstrates that there are GBT learners who are able to learn hypotheses that were not in their initial hypothesis set. This potentially provides a more fundamental approach to overcoming such limitations.
On the other hand, for practical and implementation purposes, most if not all machine learning models operate on finite samples.
We strongly disagree that finiteness (or not) limits the merit of this submission.
* Towards the question 'How might one use this unification to design new algorithms?': because the entire space of learning models is parameterized (it can be viewed as a cube in Fig. 1), new learning models can be explored naturally by varying the parameters.
Existing models are mainly located at vertices of the cube; in this paper we explored two new types of learner, in Section 3 and Section 4 (please see details in the general response). As for algorithms, since each model on the cube is a UOT problem, it can be solved by any algorithm developed for UOT. The most popular approach is Sinkhorn iteration, which we included as Algorithm 1 on line 88.
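For concreteness, the balanced entropy-regularized case of Sinkhorn iteration can be sketched in a few lines of NumPy. This is an illustrative sketch only, not the paper's Algorithm 1: the GBT setting uses the unbalanced variant, in which the exact marginal projections below are replaced by softened (KL-proximal) updates, as in Chizat et al., 2018.

```python
import numpy as np

def sinkhorn(C, mu, nu, eps=0.1, n_iters=200):
    """Entropy-regularized balanced OT via Sinkhorn iterations.

    C is an (m, n) cost matrix; mu (m,) and nu (n,) are the marginals.
    Returns a transport plan whose row/column sums match mu and nu."""
    K = np.exp(-C / eps)       # Gibbs kernel
    u = np.ones_like(mu)
    for _ in range(n_iters):
        v = nu / (K.T @ u)     # scale to match column marginals
        u = mu / (K @ v)       # scale to match row marginals
    return u[:, None] * K * v[None, :]
```

In the unbalanced case the hard divisions become soft KL-proximal updates (e.g. raising `mu / (K @ v)` to the power `rho / (rho + eps)` for a KL penalty weight `rho`), which is what allows mass to be created or destroyed.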
---
Rebuttal Comment 1.1:
Title: Follow up to rebuttal
Comment: > We are surprised, given the other reviewers' assessments, and disappointed that you find our paper hard to read.
Presumably, the authors would like this work to have an impact on the general machine learning audience who may use this unification to understand better/develop learning algorithms. I'd like to point out that reviewer NpBr noted the following:
"The paper assumes a very high level of familiarity with concepts such as the Frequentist-Bayesian dichotomy and generative vs. discriminative approaches. Readers without extensive prior knowledge in these areas may find it challenging to fully grasp the nuances of this work. Including additional explanations or references to relevant background material would enhance the accessibility of the paper."
I believe this is a significant limitation of the presentation.
>We respectfully disagree with your assessment on how finiteness limits our contributions.
I'd like to point out that the finiteness in the discussion here is not about samples, which is indeed the setting one observes in practice; it is about the finiteness of the data/hypothesis spaces that fundamentally define the problem. The main contribution of this work is a theoretical unification of learning algorithms, so requiring finiteness of both the hypothesis space and the dataset seems a very strong and unreasonable assumption. It is stated with little discussion, even conceptually, of why it is reasonable.
After reading your responses, other reviews, and revisiting the paper, I am happy to raise my score from 2 to 3, given that within the context of the limitations that I have pointed out, the work is sound (also increasing my score on the soundness).
---
Reply to Comment 1.1.1:
Title: Clarification
Comment: Thanks so much for following up!
Clarifications of your two points:
1) Is the "high level of familiarity with concepts such as the Frequentist-Bayesian dichotomy and generative vs. discriminative" the only challenge you find with the paper? If so, that is a simple modification that would we be very happy to make. We are struggling to understand on what basis the paper deserves a "3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations." given the issue is one that can be easily resolved by providing some more background. Indeed, reviewer NpBr rated the paper quite a bit higher despite raising this point.
2) To clarify, our response was about the data and hypothesis spaces. We used the phrase "finite samples" in the response which is perhaps where the confusion arose, by which we meant approximating a continuous distribution by finite samples. We don't see this as a limitation, given that most models work in finite approximations anyway. It is worth noting that there is very strong theory in OT showing convergence of the discrete to continuous when the continuous is approximated (discretized) by samples (e.g. Aude et al., 2016). This is in addition to the strong pure mathematical foundation we noted in our previous response. These results provide strong connections of the discrete case to the continuous case, and avoid some pretty heavy mathematics. We think this is a reasonable compromise, given that the discrete case is what is likely to be used, practically. We are happy to be clearer about this choice in the paper, if that is helpful.
Thanks again for engaging and we are hopeful that you will consider raising your score!
Aude, G., Cuturi, M., Peyre, G., and Bach, F. Stochastic optimization for large-scale optimal transport. arXiv preprint arXiv:1605.08527, 2016. | Summary: Standard models of machine learning treat different internal constraints (e.g., prior knowledge) and external constraints (e.g., time availability, environmental non-stationarity) as separate problems, and thus hinder the development of unified learning agents. This paper proposes a framework called Generalized Belief Transport (GBT), which builds upon Unbalanced Optimal Transport, to unify existing learning models. GBT offers a parameterization that allows for interpolation between different modes such as Bayesian inference, Frequentist inference, cooperative learning, and discriminative learning. The authors provide theoretical analyses and empirical investigations to support their claims.
In short, the paper makes the following contributions:
- It proposes a parameterization that unifies existing learning approaches and shows its continuity and differentiability.
- It analyzes the behavior of GBT under variations in the parameter space.
- It studies sequential GBT for both static and non-static settings.
Strengths: **Originality & Significance**
- The paper presents an interesting point of view that unifies various learning models with different internal and external constraints. The proposed GBT framework allows flexible combinations of $(\epsilon_P, \epsilon_\eta, \epsilon_\theta)$ and helps the development of unified learning agents.
**Quality & Clarity**
- The paper is well-structured and provides a clear explanation of the challenges and the proposed framework. The theoretical analyses conducted to establish continuity, differentiability, and behavior under parameter variations help improve the understanding of the framework. The empirical investigations on sequential learning and predictive performance under environmental drift also demonstrate the practical applicability of GBT. The authors provide the proofs for the key theorems and the code for reproducing experiments in the supplementary material.
Weaknesses: **Empirical significance**
- It is very interesting to unify seemingly different learning models like the Frequentist and the Bayesian. However, the paper touches vaguely on the practical implications of such a unified framework. For instance, are there trade-offs for taking different points in the parameter space? How does the proposed framework extrapolate? Does it lead to new learning methods that are previously under-explored? What are some practical future steps for improving/utilizing the proposed framework?
**Presentation**
- The paper assumes a very high level of familiarity with concepts such as the Frequentist-Bayesian dichotomy and generative vs. discriminative approaches. Readers without extensive prior knowledge in these areas may find it challenging to fully grasp the nuances of this work. Including additional explanations or references to relevant background material would enhance the accessibility of the paper.
**Other details**
- Section 1: missing definition of $\eta$ (estimation of the data distribution) in the first 4 paragraphs.
- The arrangement of the color bars in Figure 2 (left) can be more organized. Currently it’s hard to read and distinguish the setting names.
- Color legend is missing for Figure 3.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: - How is the area for the GBT learner’s posteriors measured for Figure 6 (the curves do not seem to be closed)?
- Are there any existing efforts to combine discriminative learning methods with Bayesian approaches (class priors) to tackle a learning problem? How does the proposed framework handle such integration?
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Given the lack of previous work attempting to unify and analyze the general problem of learning under constraints, what are the potential research directions and open questions in this area? How can the field benefit from a unified framework for learning and inference?
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your constructive suggestions and insightful comments.
We are excited that you find our proposed 'framework allows flexible combinations of $(\epsilon_P, \epsilon_\eta, \epsilon_\theta)$ and helps the development of unified learning agents'. Please see our answers to your questions below.
* Empirical significance and limitations: thank you so much for the insightful comment on how we shall improve our presentation on these points. We have organized our thoughts in the general response, please see details there.
* Presentation: as suggested, we will add a preliminary section on existing learning models before introducing the GBT framework (around line 63, before Section 1.1). Thank you for the detailed comments; they will also be addressed in revision.
* Question: 'How is the area for the GBT learner’s posteriors measured for Figure 6 (the curves do not seem to be closed)?'
* Answer: You are right, the entire curve is not closed. To calculate the area, we plotted the mean posterior data from round 1 to round 300 on $\mathcal{P}(\mathcal{D})$ (the equilateral triangle),
and observed that these points become periodic with period 20 (the period of the true hypothesis) as data increase. Thus we divided the mean posterior data from round 1 to round 300 into 15 full periods of length 20. For each period, we consider the polygon spanned by the 20 points; its shape stabilized after the initial periods. Hence, we used the average area of the last few periods as the area of the GBT learner's posteriors.
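The area computation described above amounts to the shoelace formula applied to each period's 20-point polygon. A minimal sketch (our own illustration; function and variable names are assumptions, not the authors' code) could look like:

```python
import numpy as np

def polygon_area(points):
    """Shoelace formula: area of a polygon given as an (n, 2) array of
    vertices in traversal order."""
    x, y = points[:, 0], points[:, 1]
    return 0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))

def mean_period_area(trajectory, period=20, n_last=5):
    """Average polygon area over the last `n_last` full periods of a
    trajectory of 2D posterior points, where `period` is the cycle
    length of the true hypothesis."""
    n_periods = len(trajectory) // period
    areas = [polygon_area(trajectory[k * period:(k + 1) * period])
             for k in range(n_periods)]
    return float(np.mean(areas[-n_last:]))
```

Here the posterior points on the simplex are assumed to have been projected to 2D coordinates before the area is taken.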
---
Rebuttal Comment 1.1:
Comment: Thank you for the response. My concerns are addressed. I'm happy to keep my rating.
---
Reply to Comment 1.1.1:
Title: Follow up
Comment: Thank you! | Summary: The authors introduce the concept of Generalized Belief Transport to unify and parameterize 3 different axes of learning from within the formalism of Unbalanced Optimal Transport. Corresponding limits points on the "cube" of these 3 axes recapitulate many common learning paradigms from the literature (e.g., bayesian inference, frequentist inference, cooperative communication, etc.). The authors further demonstrate an algorithm for solving some synthetic instances of these various problems, and demonstrate various tradeoffs as one moves from one type of learning to another.
Strengths: Originality: The connections between these formalisms through the lens of UoT (and the proofs) are certainly novel.
Quality: The overall quality of presentation is high, and I particularly like the "cube" device for visualizing the different limit points.
Clarity: The presentation is incredibly straightforward, and the proofs are quite clear.
Significance: This is less clear, but a more general formalism that can automatically facilitate many different, familiar forms of learning seems instrumentally valuable for further methods development.
Weaknesses: How useful this formalism will _actually_ be for practitioners is my primary point of uncertainty. Typically, any given practitioner doesn't really have any doubt about where they are on the "cube" for their learning problem, and thus they throw the most powerful technique available to them at that particular point of learning-agent-space. This work might be better served by a less synthetic example, i.e., an example problem that _requires_ moving between points in learning-agent-space, to which this formalism would actually be uniquely well suited. But a good (better yet, a *convincing*) example of such a thing is not obvious to me. tldr: how do I actually use this formalism, if I know exactly where I am in learning agent space, and why wouldn't I use one of the more well-known tools?
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: see above
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your generous comments and suggestions.
We are grateful that you find our utilization of UOT novel and our presentation clear and straightforward.
Most importantly, your request for a *convincing* example hit home with us. The proposed addition to the introduction is, we believe, convincing. We would very much appreciate your thoughts on that example, and hope you agree!
Regarding your concerns on `how useful this formalism will actually be for practitioners',
please see detailed clarification in the general response.
At a high level, we believe (1) models lying in the interior of the cube provide capabilities beyond existing models, which are worth systematic exploration.
(2) more importantly, rather than creating a new model for the modeler, GBT is developed from the agent's point of view (thank you for pointing this out; we will add a clarification in revision).
In practice, agents do not know the situation well enough to identify an optimal model in the cube at the beginning of a mission.
They need to find their way towards the optimal model as data come in. The cube represents a space of possible ways an agent may learn about the world. Such interpolation can be done by gradually changing the agent's $\epsilon$ as shown in Fig. 1 on page 11 of the supplementary material. Algorithms that effectively (and ideally efficiently) do this inference based on observed data are an important direction for future work.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for their response, and have raised my score to a 6 in response.
---
Reply to Comment 1.1.1:
Title: Thanks!
Comment: Thank you! Please let us know if there are any further questions or clarifications we can provide! We would be happy to provide further details. | Rebuttal 1:
Rebuttal: We would like to thank all reviewers for their generous comments and suggestions.
Here we clarify the practical implications of the proposed framework.
Specific comments are addressed for each reviewer separately.
We are encouraged that all reviewers agree that Generalized Belief Transport (GBT) establishes a uniform mathematical foundation for a broad class of learning models, upon which basic questions in learning can be answered rigorously.
* We agree with Reviewer 7Pt6 "This work might be better
served by a less synthetic example, i.e., an example problem that requires moving
between points in learning-agent-space, to which this formalism would actually be uniquely well suited".
Here is a concrete example:
``Suppose a learner is observing an *agent* behaving in an environment. As an observer, one may wish to learn about the environment from the agent's actions. However, any inferences one draws depend on beliefs about the agent. How is the agent updating their beliefs? Do they have stable goals, or are they changing over time? Perhaps the agent is selecting actions to communicate what they know? In order to draw inferences over these possibilities, one must parameterize the space, ideally in such a way that one could optimize over the possibilities. Indeed, with such a framework, one would be able to naturally interpolate between classic dichotomies such as Bayesian and frequentist inference, static and dynamic environments, and helpful versus neutral agents. We propose such a framework.''
We will insert this text in the introduction between the second and third paragraphs, to help the reader understand the need for our generalized framework.
* Towards answering "what are the potential research directions and open questions in this area?
How can the field benefit from a unified framework for learning and inference? what
new insight can be gained from this?" (Reviewer NpBr and Reviewer mBxm),
here are our thoughts:
(1) Because the entire space of learning models is now parameterized (it can be viewed as a cube in Fig. 1),
new learning models can be explored naturally,
or even optimized with respect to particular tasks in the future.
Existing models are mainly located on the vertices of the cube; in this paper we take a first probe into the cube with the two cases below.
Full exploration of the cube and an effective optimization routine are left to future work, to avoid overcomplicating an already technically detailed paper.
(a) In Section 3, we proved that there are learners in the cube who can learn a new hypothesis naturally.
A drawback of Bayesian inference is that only hypotheses in the original hypothesis set can be learned.
Proposition 12 showed that GBT learners with $\epsilon_{\eta} = \epsilon_{\theta} = \infty, \epsilon_P\in (0,\infty)$ are able to learn *any* hypothesis, which begins to approximate the flexibility of human learning.
(b) In section 4, we demonstrated learners in the cube who are capable of learning in dynamic environments (Fig.5 and Fig.6).
To be more concrete (as suggested by Reviewer 7Pt6), consider the situation where a learner observes data from a ground truth that is dynamic.
For example, the weather gradually changes, the climate slowly drifts over time, learners learn with experience, etc.
Figure 5 shows that when the ground truth travels along a triangular path (Fig. 5a),
the Bayesian learner converges to a fixed hypothesis on a vertex (Fig. 5b), whereas the GBT learner with parameters $(1, 10, 10)$ is able to detect that there is a cyclic pattern (Fig. 5c).
(2) The differentiability of GBT paves the path for online interpolation between learning models.
Here are several cases where movement in the space yields interesting, novel theory:
(a) It is common in state-of-the-art machine learning models that an agent learns probabilistically
but makes decisions greedily.
This heuristic represents a path where a big leap on the cube was taken at the last step.
An interesting question is under what circumstances this is optimal, what the trade-offs are, and under what conditions smoother trajectories are preferable.
(b) When we communicate with others, human learners can move from Bayesian inference to cooperative communication gradually,
which involves recognizing that people select data purposefully, rather than sampling at random, conditional on the hypothesis they wish to convey.
Such smooth interpolations can be achieved by modifying a learner's location on the cube, as demonstrated in Section 3.1 (page 11, Fig. 1) of the supplementary material.
As Reviewer 7Pt6 pointed out, most existing research focuses on the case where the "practitioner doesn't really have any doubt about where they are on the cube".
However, in reality, both the environment and other agents are dynamically changing.
We believe GBT can facilitate research in the direction of building a learner who is able to adopt appropriate learning models based on incoming data,
rather than learning with a fixed model. | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: This paper proposes a framework called Generalized Belief Transport that unifies different types of machine learning models. The authors have proven a number of properties about their proposed framework.
Strengths: * The problem proposed seems relevant given the number of different approaches to machine learning. If successful, the proposed framework can greatly unify prior work.
Weaknesses: * I do not have any prior knowledge about the problem studied in this paper at all, so I found the paper to be difficult to understand. That might not necessarily be the fault of the authors, but I will defer to the experts in this area to comment.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: N/A
Confidence: 1: Your assessment is an educated guess. The submission is not in your area or the submission was difficult to understand. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 2 fair
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your comments; please see further clarification on the practical implications in the general response. We are hopeful that you will appreciate the importance of this work! | null | null | null | null | null | null |
History Filtering in Imperfect Information Games: Algorithms and Complexity | Accept (poster) | Summary: The submission considers the problem of approximating public belief states. The submission's main contributions are:
- Showing that certain fundamental computational problems related to public belief states are generally FNP-complete.
- Defining a subclass of games and showing that enumerating public belief states for these games can be done in polynomial time.
- Introducing an MCMC algorithm for the computation of public belief states in trick-taking card games.
Strengths: I think that studying scalable mechanisms for approximating public belief states is an important direction, so I am partial to the direction of the submission. Each of (what I view as) the submission's main contributions is novel. Both Theorem 1 and Theorem 2 are nice results. The MCMC algorithm is also a good contribution.
---
I am disclosing here that I did not read the proofs of the theorems or check the details of the experimental setup.
Weaknesses: My criticisms regarding the submission mostly hinge on presentation.
I think the submission does a poor job with respect to existing work in two respects.
**First, it does not motivate the importance of public belief states as well as it could.** The only motivating use cases provided in the introduction are DeepStack and Pluribus. There are so many more great works and lines of research that rely on public belief states:
> Work in control literature: Decentralized Stochastic Control with Partial History Sharing: A Common Information Approach (Nayyar et al., 2013)
The paper that introduced public belief states! There is a large body of work in decentralized stochastic control literature using public belief states derived from Nayyar et al.'s work.
> Work in Dec-POMDP literature: Optimally solving Dec-POMDPs as continuous-state MDPs (Dibangoye et al., 2013); Sufficient Plan-Time Statistics for Decentralized POMDPs (Oliehoek 2013)
Independently discovered public belief state-like objects for Dec-POMDPs. There is a large body of work on how Dec-POMDPs can be solved using HSVI from this perspective.
> Search algorithms for common-payoff games: Improving Policies via Search in Cooperative Partially Observable Games (Lerer et al., 2020); Scalable Online Planning via Reinforcement Learning Fine-Tuning (Fickinger et al. 2021); A Fine-Tuning Approach to Belief State Modeling (Sokota et al., 2022)
The best performing Hanabi AIs rely on search techniques that use public belief states!
> Other poker AIs: Superhuman AI for heads-up no-limit poker: Libratus beats top professionals (Brown and Sandholm, 2017); Combining Deep Reinforcement Learning and Search for Imperfect-Information Games (Brown et al., 2020); Player of Games (Schmid et al., 2021)
I'm sure the authors are aware of these. But it's still valuable to list them to emphasize to the reader the ubiquity of the role that public belief states have played in successful poker AI.
> Other techniques for zero-sum games: HSVI can solve zero-sum Partially Observable Stochastic Games (Delage et al., 2022)
Abstracting Imperfect Information Away from Two-Player Zero-Sum Games (Sokota et al., 2023)
These works give new techniques for solving and doing search that don't require gadget games.
> Work for adversarial team games: A Marriage between Adversarial Team Games and 2-player Games: Enabling Abstractions, No-regret Learning, and Subgame Solving (Carminati et al., 2022); Team belief dag form: A concise representation for team-correlated game-theoretic decision making (Zhang et al., 2022)
These works show how adversarial team games can be solved using public belief states.
---
It's to the benefit of everyone for the submission to motivate itself more completely:
- Readers without extensive knowledge of the literature will be left with a greater sense of the importance of public belief states. As is, the submission risks leaving them with the impression that public belief states have only been studied by a niche community mostly focused on poker.
- Authors of these works who normally interact with mostly or entirely disjoint bodies of literature from that of the imperfect information game community (such as those who work on Dec-POMDPs or decentralized stochastic control) are more likely to be made aware of the submission and in turn promote it among their own communities.
- The submission is made stronger because it is made clear that it impacts a much broader body of literature than the current version reads as impacting.
**Second, the submission appears to be entirely unaware of the single most related work.** The submission claims that applications of "theoretically-sound depth-limited search in imperfect information games ... have been limited to games in which the relevant information is
small enough to be enumerated." **This is not true!** The main contribution of *A Fine-Tuning Approach to Belief State Modeling (Sokota et al., 2022)* introduces a technique (called belief fine-tuning) for approximating public belief states in settings where the relevant information cannot be enumerated. They show that belief fine-tuning facilitates strong performance in such settings (specifically, variants of Hanabi).
I encourage the authors to compare/contrast it to their own work in a related work section. It would also be interesting to see a direct comparison with the TTCG Gibbs sampler in Oh Hell, but it would be a lot of work for a rebuttal period, so it is not a necessity.
### re: theory
I think the way that the theoretical results are summarized could be improved. In section 3.2, the submission states that the subsequent section examines problem variants 1-5. However, at a surface level read it is difficult to discern whether this is the case. The subsequent section states that the construction problem is FNP (and therefore more complex computations are, also). But what about existence? I think explicitly referring back to problem variants by number, where possible, would help to improve clarity.
### re: experiments
I would be careful in extrapolating too much from the results of the experiments. The policy class that the submission examines is not resemblant of policies that humans use or that RL models learn. Furthermore, value error does not necessarily translate well to search performance. (This is not to say that the experiments are not appreciated, just that the submission should caution the reader of these limitations.)
### Specific comments I made while I was reading (some redundancy with above)
> Search in imperfect information games requires subgame decomposition—a process in which a subgame is computed from public information and player beliefs.
This isn't true. It's perfectly possible to perform search without subgame decomposition (for example, IS-MCTS). What you really ought to argue here is that the approaches that have been most successful/have the strongest guarantees require subgame decomposition.
> Applications have therefore been limited to games in which the relevant information is small enough to be enumerated. Scaling to larger, more complicated imperfect information games requires a deeper theoretical understanding of history filtering.
It is not true that applications have been limited to games in which relevant information is small enough to be enumerated. See: *A Fine-Tuning Approach to Belief State Modeling* (ICLR 2022). Thus, it is also untrue that scaling to such games requires a deeper theoretical understanding of history filtering than existed prior to the submission.
> Of these, generation and enumeration are clearly relevant to history filtering. Prior work (Schmid et al. [2021]; Brown et al. [2020]; Moravčík et al. [2017]; Brown and Sandholm [2019]) has generally relied on enumerative methods, i.e. filtering histories by explicitly representing the entire PBS. Generative methods for history filtering potentially have the advantage of avoiding explicit PBS representation.
Generative approaches were used in combination with public belief states in both of the works below:
*Scalable Online Planning via Reinforcement Learning Fine-Tuning* (NeurIPS 2021)
*A Fine-Tuning Approach to Belief State Modeling* (ICLR 2022)
The reason these works avoided explicit PBS representation was actually orthogonal to PBS representations. For the NeurIPS 2021 paper, it was because of the high computational expense of doing search for every information state supported by the PBS. For the ICLR 2022 paper, it was to scale to settings in which the support of the PBS is too large to enumerate.
> Prior work has typically limited applications to games where the public state is trivially enumerated and beliefs can be represented explicitly in physical memory
Again, this is untrue -- see:
*A Fine-Tuning Approach to Belief State Modeling* (ICLR 2022)
Technical Quality: 3 good
Clarity: 1 poor
Questions for Authors: > Think of the things where a response from the author can change your opinion
As articulated above, my main concerns are regarding the presentation. (Despite the disproportionate amount of text in the weaknesses box, I liked the paper : ) !) A revised copy of the submission that addresses the presentation criticisms would change my opinion. If this is not allowed during the review period, a provision of detailed text that the authors commit to including in the paper may change my opinion.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 1 poor
Contribution: 3 good
Limitations: I think the place where the submission is in most danger of not having acknowledged its weaknesses is in the experiments section. The submission makes claims such as "We demonstrate its effectiveness empirically" (it referring to the MCMC algorithm).
However, as discussed in the weaknesses section, there are some significant limitations:
- The randomly generated policies that the submission uses do not possess the kind of structure that policies that humans or neural network-based policies have -- empirical results won't necessarily extrapolate.
- The relationship between value error and search performance is hard to quantify. Having low or high value error isn't always a good indicator of downstream search performance.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: The reviewer's suggestions concerning related work and motivation have provided us with the means to significantly improve the presentation of the paper. This will be accomplished by modifying the abstract and introduction, replacing Sections 2.2 and 2.3 with text that motivates public belief states in a broader sense, and adding text discussing the main differences between our approach and belief fine-tuning.
For the abstract and introduction, the impact of our submission does not necessarily depend on the untrue claims that 1) subgame decomposition is necessary for search in imperfect information games and 2) generative methods are completely novel. For 1), as stated by the reviewer, the algorithms with strong guarantees that tend to perform best require subgame decomposition. For 2), our generative approach is novel because it works in public states with support too large to enumerate and is unbiased without having to generate
and reject inconsistent histories. Ideally, we would also compare empirically to belief fine-tuning to show that this makes a difference for search, but we have not yet done so.
We will move the notation currently in Sections 2.2 and 2.3 that is necessary to clearly explain Sections 3-5 into Section 2.1, while replacing those sections with more motivation related to HMMs, POMDPs, and Dec-POMDPs, as well as the recent work on zero-sum POSGs. FOSGs are generalizations of HMMs and POMDPs, so citing the mentioned papers is sufficient for those bodies of work. Likewise, the missing work on Poker and Hanabi AI will be mentioned for completeness.
The most important improvement our submission can make is comparing our method with BFT. BFT takes a pre-trained model as input, whereas our method is parameter-free, given the policy. One of the main advantages of BFT is that it doesn't require training the sequential generative model as a function of the policy. This is vital for the FOSG case that we focus on, and our method achieves the same goal by avoiding training that model altogether. Our approach instead requires efficient domain-specific algorithms for the construction problem and neighbor generation in the Markov chain that provably leads to irreducibility. This could be viewed as a shortcoming compared to BFT, but in domains where our approach applies, it is unbiased - unlike BFT. Another potential advantage of our approach is that it always produces valid histories according to the game rules. In some domains other than Hanabi, it is unclear how to structure the pre-trained model such that its output is always valid. Otherwise, the dynamics model needed for fine-tuning may be undefined. A concrete example of where this could happen is any trick-taking card game, where a naively structured belief state model could assign a player a card that they could not possibly hold according to the game's rules. As we show formally earlier in the paper, producing histories that correspond to observation sequences is a hard problem in itself. Our algorithm comes with clear requirements that guarantee it will apply correctly.
We will also caution the reader of the limitations of the experiments and emphasize that they are primarily intended to demonstrate the efficiency of the approach (in terms of mixing time) while still maintaining reasonable accuracy (by outperforming importance sampling). Though our experiments demonstrate that our approach is effective in the sense that we can use it to approximate PBS values in the example domain, it should be clear to the reader that this does not imply that the method will necessarily lead to better search performance.
In any case, the detail in this review is exceptionally helpful for the improvement of our paper.
---
Rebuttal Comment 1.1:
Title: Response to Rebuttal
Comment: Based on the author response, I am satisfied that my concerns will be addressed in the camera-ready version of the work and am raising my score accordingly. | Summary: This paper studies the algorithmic complexity of history enumeration and generation in imperfect information games. The authors define these problems with a formal model of factored observation stochastic games (FOSGs). They proved computational complexity results, and empirically demonstrated the effectiveness of an MCMC algorithm for generating history in the domain of trick-taking card games.
Strengths: The problem studied in this paper is interesting, and the authors take an original perspective. In particular, they observe the centrality of history enumeration and generation in state-of-the-art algorithms for depth-limited game solving, and provide rigorous definitions of the associated computational problems.
The work appears technically sound: both the theoretical complexity analysis and experimental study of MCMC methods for history generation.
The presentation is clear, both in prose and mathematical description.
Weaknesses: The major theoretical conclusion is that enumerating history is algorithmically hard in general, but is feasible for games that are sparse in a well-defined sense. These results are not surprising, and the authors do not pull out any deeper insights from their demonstration.
The two parts of the paper—fundamental complexity analysis and application to card games—seem quite disjoint from each other.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: The general hardness of history enumeration for FOSGs is to be expected, as the analogous problems in a single-agent context are already understood to be hard. For example, in POMDP solving one of the challenges is to approximate belief states. Could results and methods from the POMDP literature directly apply to the FOSG setting? For example, what would any of these works have to say about the multiagent version of the problem:
[1] Nonapproximability Results for Partially Observable Markov Decision Processes, Lusena et al.
[2] What Makes Some POMDP Problems Easy to Approximate?, Hsu et al.
Additional detailed questions:
As noted above, the MCMC empirical part reads as quite a bit detached from the theoretical part of the paper. Is the point really to exercise the theory, or is it a separate contribution to develop techniques for trick-taking card games?
Is the MCMC algorithm specific to this domain or can it be generalized to a broader class of imperfect information games? Can you come up with a more general characterization of world-state structures for which this will be applicable and effective?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 3 good
Limitations: Enumerating history in imperfect information games is itself an interesting combinatorial problem that is worthwhile studying. However, will this “enumeration” approach be plausible for more complex games? Computing exact beliefs seems tractable for Poker, but what about games like Stratego? In fact, the relationship between an accurate belief state representation and the strength of a depth-limited search algorithm is unclear. The IS-MCTS algorithm the authors cite actually used a uniform distribution over possible world states to approximate a belief (instead of the exact belief described by the authors), which also yields fairly good empirical results. More efficient approaches could be to use particle filtering [1], or deep generative models [2, 3].
[1] Monte-Carlo Planning in Large POMDPs, Silver & Veness
[2] Generalized Beliefs for Cooperative AI, Muglich et al.
[3] Combining Tree-Search, Generative Models, and Nash Bargaining Concepts in Game-Theoretic Reinforcement Learning, Li et al.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: The non-approximability results mentioned by the reviewer are of interest to our area of work in general, but concern the hardness of computing or approximating solutions to POMDPs (in the form of optimal policies) rather than finding the sequence of unobservable states corresponding to some input sequence of observations. Those results also apply to the FOSG setting, but not in the context of history filtering.
We note the clarity issues raised by the reviewer, and want to emphasize that the sparsity result is meant to explain when enumeration is a viable option. Since it is not always the case, we introduce an approximate, but unbiased, generative method, similar to particle filtering, in the second part of the paper. Our MCMC method is intended for domains where computing exact beliefs is intractable. Games like Stratego are an intended use case, and our experiments are evidence that it can be a viable approach in domains where we can develop efficient algorithms for construction and neighbor generation in the Markov chain.
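To illustrate the kind of method the rebuttal describes (this is a generic sketch of a Metropolis-style history sampler with our own toy names, not the paper's TTCG-specific Gibbs sampler or its max-flow construction step):

```python
import random

def metropolis_history_sampler(initial, neighbors, weight, steps, seed=0):
    """Generic Metropolis sampler over histories consistent with the
    public observations.

    initial:   a consistent history to start from
    neighbors: h -> non-empty list of consistent neighbor histories
    weight:    unnormalized target probability of a history
               (e.g., the policy's reach probability)
    """
    rng = random.Random(seed)
    h = initial
    samples = []
    for _ in range(steps):
        cand = rng.choice(neighbors(h))
        # Metropolis acceptance rule; assumes every history has the same
        # number of neighbors (symmetric proposal), otherwise a
        # Metropolis-Hastings correction term is needed.
        if rng.random() < min(1.0, weight(cand) / weight(h)):
            h = cand
        samples.append(h)
    return samples
```

On a toy two-state chain with unnormalized weights 1 and 3, the empirical frequency of the heavier state converges to its stationary mass of 0.75, which is the sense in which such a sampler is unbiased given irreducible neighbor generation.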
---
Rebuttal Comment 1.1:
Comment: I have read and appreciate the author response. | Summary: The paper presents theoretical analysis on the hardness of history generation in public belief states, a concept used in imperfect information games (IIGs). Then for a game where enumeration is prohibitively expensive (trick-taking card game Oh hell), they devise a specialized Gibbs sampler to generate histories from the public belief state. The sampler uses a polynomial time algorithm based on solving integer maximum flow to generate history candidates from the corresponding public state. The candidates that are then filtered to correct their distribution, i.e. to find the public belief state. In experiments, they compare the Gibbs sampler with baselines.
Strengths: 1. Clearly written and well motivated paper. It can serve as a reference for claiming whether a game is "hard" or "easy" to generate histories from a PBS.
2. Introduces a practical generative algorithm without explicit range representation for estimating PBS value in game Oh Hell.
3. Shows that Gibbs sampler can be indeed used for their setting.
Weaknesses:
1. A weakness of the paper is the misrepresentation of prior work, specifically the claim that tractability has been ignored and that only games like poker were considered.
Consider the quote from the cited paper [Richards and Amir, 2012]:
> Identifying legal moves involves testing the satisfiability of arbitrary Boolean expressions and is therefore theoretically exponential in the size of the game description.
This paper involved history generation in the game of Racko, Battleship and GOPS.
There is a missing citation to [1], which also claims the problem is difficult:
> Due to exponential branching of the opponent’s private information, a large portion, or even all, of the tracked states may suddenly become incompatible with a received observation.
The paper involved history generation in the game Stratego, Phantom Tic-Tac-Toe, Goofspiel.
I concede there were no _formal_ definitions on tractability. But the claim seems too strong, as previous quotes show there was awareness on the difficulty of the problem.
2. Def 3 - I believe the definition of a sparse public state is not well formulated for what Thm 2 attempts to state. For any game, we can take a polynomial $p$ of high enough degree such that we can then claim any $G$ is always sparse. Then, according to Thm 2, solving FILTER(G,$\pi$) is always polynomial time. But this is inconsistent with Thm 1. I believe this could be resolved by adding a restriction on the degree of the polynomial $p$.
3. The paper introduces the concept of sparse public states to show that enumeration for the FILTER problem can be done in polynomial time. However, the authors do not take advantage of this separation of "easy" and "hard" problems. I would expect the authors to use the bound to separate some games based on their hardness, and an experiment that shows the performance of a general algorithm on the two classes compared to a baseline. Instead, they found a game that is "hard" according to the bound, and devised a domain-specific algorithm that can approximate PBSs in that game.
4. For the PBS value error experiment I am missing a motivation for why this is a relevant metric. As far as I know, existing algorithms in prior work rather use infostate-value functions for approximating strategies, and PBS values are not descriptive enough for use in IIGs. Can you add a motivation, perhaps by a reference to a paper where this would be relevant?
[1] Learning to guess opponent’s information in large partially observable games. D Seitz*, N Milyukov, V Lisy.
Technical Quality: 2 fair
Clarity: 4 excellent
Questions for Authors: 1. In experiments, how is the value of a history $V^\pi(h)$ computed? I assume this is not part of the compared algorithms and it is exactly computed?
2. Can the method be trivially applied to other trick-taking card games like the mentioned Contract Bridge, Skat, and Hearts, or it is applicable only to Oh Hell? Are there trick-taking card games that have sparse PBSs (i.e. are "easy")? If yes, it would make the paper a lot stronger if your algorithm is shown to be more general for a class of trick taking games, and that it can generate PBS in both "easy" and "hard" games.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 4 excellent
Contribution: 3 good
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Our response to Reviewer 8l8p describes how we will address potential misrepresentations of prior work.
It's true that for any fixed, finite game, we can choose some polynomial with arbitrarily high degree to bound the size of the PBS support. However, complexity arguments, such as sparsity, only apply when we consider families of games that scale in size.
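To make the planned restriction concrete (our notation here; the camera-ready wording may differ), sparsity can be stated for a *family* of games with the polynomial, and hence its degree, fixed uniformly over the family rather than chosen per game:

$$
\{G_n\}_{n\in\mathbb{N}} \text{ is sparse} \iff \exists\, p,\ \deg p = d \text{ fixed, s.t. } \forall n:\ \max_{s_{\mathrm{pub}}\in G_n} \big|\mathrm{supp}\,\beta(s_{\mathrm{pub}})\big| \le p\big(|G_n|\big),
$$

where $\beta(s_{\mathrm{pub}})$ denotes the PBS at public state $s_{\mathrm{pub}}$. Because $d$ cannot be tuned to any particular $G_n$, the degenerate high-degree polynomial the Reviewer describes is ruled out.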
The clarity problems in our work have led to confusion about sparsity, the motivation for generative algorithms and our MCMC approach, and our experiments. The concept of sparsity is meant to explain when enumeration may be a viable approach in certain families of games, because their PBS sizes only grow polynomially. It provides a rule of thumb for understanding where important algorithms from prior work are applicable as-is.
Oh Hell is an example game where enumeration is not viable at scale, but our new MCMC approach can approximate beliefs. The domain is not chosen to be hard according to our bound, but rather to represent basic trick-taking card games, which have long been of significant interest to the community. Our experiments are in small versions of Oh Hell, and still enumerate the public state to provide an upper bound on what any sample-based generative method could hope to achieve. We also compare to an importance sampling baseline.
The rules of Contract Bridge introduce new constraints (beyond 1. player must have card $c$ and 2. player cannot have suit $s$), and thus the game likely requires a different neighbor generation algorithm (and potentially a different construction algorithm as well). However, we see no reason why the algorithm would not apply to Skat and Hearts, where the constraints are the same as in Oh Hell.
ReBeL, introduced in Combining Deep Learning and Search for Imperfect-Information Games (Brown et al., 2020), demonstrates that infostate values can be recovered from the expected values of public states that contain them.
$V^\pi(h)$ is indeed computed exactly in our experiments. | Summary: The paper concerns history-filtering method to estimate values of imperfect-information subgames. Such a sub-task proved to be a crucial component of previous depth-limited solving algorithm developed in the related literature.
The contribution of the paper is twofold:
- on one hand, the paper analyses the complexity of such a task from a theoretical perspective, proving its FNP-completeness in the general case, and proving that a specific condition is sufficient to make its complexity polynomial. Such a condition is satisfied by many games customarily addressed in the literature.
- on the other hand, the paper proposes an algorithm that performs history sampling for value estimation in trick-taking games, and experimentally evaluates the performance-quality tradeoff of such an approximation.
Strengths: - Novel approach for value estimation in depth-limited computation
- Technically sound and original paper
- Possibly significant direction for future research
- Clear, intuitive and on-point explanation of the background and the relations with previous literature
Weaknesses: - The structure of the paper is unclear and bipartite. In particular:
- the first half of the paper (up to section 4) takes a general view and provides general theoretical complexity results regarding any possible history-filtering-related technique
- the second half of the paper provides a specific technique for a very specific type of game; moreover, the design of this technique is not informed by any specific actionable insight coming from the general theory developed in the previous sections
This bipartite structure has negative effects in highlighting the crucial properties of the proposed history-filtering framework. In fact, by reading the theoretical section one gets the idea that a crucial property for the applicability of the framework is that a sparse public tree is needed. However, I argue that the proposed technique actually depends much more on the possibility of locally sampling $\bar P^\pi(h)$ (which is crucial for effectively guaranteeing efficient sampling), while I don't see the possibly exponential number of histories in a public state as an insurmountable obstacle (it may actually be an opportunity, given the scalability of the technique).
I think this partition inside the paper critically worsens clarity for the reader.
- The gains of the overall technique are not evaluated. While the experiments show that, at least in trick-taking games, Gibbs sampling is a useful technique with a good tradeoff in terms of performance and memory consumption, it is not clear whether sampling is worthwhile in this context. How does the performance of subgame-solving techniques degrade when the cutoff values are estimated in this way? (I.e. how bad is having a value error of 0.01? Is it possible that subgame-solving algorithms are so "delicate" that sampling does not offer a good enough approximation?)
- the main gain from the technique proposed by the paper becomes evident only in the experimental section (Lines 329-335): given that $\bar P^\pi(h)$ can be easily recomputed locally when needed from the current history $h$, one can avoid keeping the whole vector $\bar P$ in memory. This should emerge more clearly in the earlier sections (final paragraph of Section 1, introduction to Section 5)
Technical Quality: 4 excellent
Clarity: 2 fair
Questions for Authors: I'm open to seeing the authors' opinion on the weaknesses I've highlighted in the previous section. Other than that, I have two minor specific questions:
- Is the *"deal $\sigma$"* term employed on line 280 referring to a "suit length assignment"? If so, in which sense $\sigma \sqsubseteq h$ ?
- TTCG sampler procedure:
- what is meant by *"replacing"* a deal with another in a history? Is it always possible for any valid suit length assignment to be substituted into a given $h$? Can this be done efficiently?
- why is it not the case that $|\Omega_\sigma| = |\Omega_{\sigma'}|$, considering that the swap operations are commutative?
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 4 excellent
Presentation: 2 fair
Contribution: 3 good
Limitations: Limitation of the current analysis are properly addressed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Valid criticisms of the bipartite nature of the paper have led to clarity issues regarding our contributions. First, we seek to formalize the problem of history filtering and show that it is hard in general (Theorem 1). Theorem 2 then provides a condition called sparsity which describes the domains where the enumerative approach can be successful. This helps explain where some well-known algorithms from prior work (DeepStack, Player of Games, ReBeL, etc.) can be applied as-is. Finally, we discuss a new approach that is necessarily more scalable and can approximate history filtering in basic trick-taking card games - a domain of significant interest to the community where enumeration is not tractable.
The Reviewer claims that the fact that $\bar{P}(h)$ can be recomputed from just $h$ emerges only in the experimental section. This is not true. However, we concede that the clarity issues discussed above could be responsible. Lines 230-233 discuss how our contribution does not depend on explicitly representing beliefs, and line 289 states that the reach probabilities can be unnormalized. We also note that the reviewer is correct that, given the scalability of our technique, the possibly exponential size of the PBS is not an obstacle in domains where our algorithm applies. That is one of the main advantages of our algorithm.
In terms of experiments and the gain of our overall technique, we add that the algorithm outperforms importance sampling using a small burn-in for the Markov chain. The small burn-in is evidence that the Markov Chain is rapidly mixing, which implies that our MCMC approach as a whole is efficient (i.e. samples are generated in polynomial time). We agree that future work evaluating our approach in an end-to-end algorithm that learns via solving subgames is needed.
In the TTCG Gibbs Sampler procedure, a deal is a prefix history containing all "dealing" actions. One suit length assignment can correspond to many deals. As deals are prefixes of the histories, replacing is simply changing the prefixes while holding the rest of the history (the cardplay actions) constant. It is always possible unless only a single suit length assignment remains, and the procedure is efficient (see proof of Theorem 3). For neighbor generation, only a single unit is moved between two suits for any player, so $\sigma$ and $\sigma^\prime$ have different neighbor sets. Additionally, different suit length assignments may have different numbers of corresponding deals.
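A minimal sketch of the neighbor move described above, under illustrative assumptions (a suit length assignment is represented as one list of per-suit counts per player; the `max_per_suit` cap and all names are ours, not the paper's actual procedure):

```python
def neighbors(sigma, max_per_suit=3):
    """All suit length assignments reachable by moving a single unit
    between two suits of one player's hand (hand totals preserved)."""
    result = []
    for player, counts in enumerate(sigma):
        for src in range(len(counts)):
            if counts[src] == 0:
                continue  # nothing to move out of this suit
            for dst in range(len(counts)):
                if dst == src or counts[dst] >= max_per_suit:
                    continue  # no self-moves; respect the suit capacity
                new_sigma = [list(c) for c in sigma]
                new_sigma[player][src] -= 1
                new_sigma[player][dst] += 1
                result.append(new_sigma)
    return result

# Two players, two suits, 3-card hands: different assignments can have
# different numbers of neighbors, which is one reason the neighbor sets
# of sigma and sigma' need not match in size.
print(len(neighbors([[3, 0], [1, 2]])), len(neighbors([[2, 1], [2, 1]])))  # prints: 3 4
```

In the actual sampler, each candidate would additionally be filtered against the two constraint types (must have card $c$, cannot have suit $s$) before being accepted.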
---
Rebuttal Comment 1.1:
Comment: While the high-level points expressed by the authors' rebuttal are rational and correct, I do not see how the authors plan to act on the paper to prevent other readers from having the same issues I did.
The only planned intervention on the paper I see at the time of this comment is the one regarding the previous works. This does not address any of the clarity issues raised by me and other reviewers. If this is the case, my opinion will remain unchanged. | null
AIMS: All-Inclusive Multi-Level Segmentation for Anything | Accept (spotlight) | Summary: This paper proposes a new task that segments visual regions into three levels: part, entity, and their relation. The authors build a unified AIMS model through multi-dataset multi-task training to address the two major challenges of annotation inconsistency and task correlation. Extensive experiments show the effectiveness of proposed approach.
Overall, this paper is good to me in terms of new concepts and unified modeling.
Strengths:
- The proposed AIMS task is interesting and novel to me; it unifies several recent tasks, including entity segmentation, panoptic-part segmentation, and panoptic scene graph generation. It is a trend to unify multi-level and multi-dataset segmentation using a transformer architecture.
- The proposed module including Task Correlation Module and R-E Complementarity Module are proven effectively in experiment part.
- The design of the mask prompt is interesting for multi-dataset segmentation and part-whole segmentation. The proposed method of multi-dataset multi-task training is novel. In particular, the method proposes a mask prompt encoder to handle the incomplete annotations at the three levels across datasets. This encoder can receive eight mask prompts and specify the corresponding region slated for segmentation. This way, the proposed method can relieve the annotation ambiguity among datasets.
- The ablation studies are good. The performance looks good compared with the recent PanopticPartFormer++ and PSGTR. It also shows better generalization than SAM.
Weaknesses:
- Presentation issues:
Several citations are missing. For example, L-118, "a masked cross-attention, self-attention, and feedforward network": to my knowledge, masked cross-attention was proposed by Mask2Former; it would be better to add a citation to the original work. Also, is the pixel decoder the default decoder in Mask2Former? Moreover, several details are missing, making the benefits hard to assess.
- Several papers are missing:
-Part-whole modeling papers:
[1] Hierarchical Human Parsing with Typed Part-Relation Reasoning, CVPR 2020
[2] Differentiable Multi-Granularity Human Representation Learning for Instance-Aware Human Semantic Parsing, CVPR 2021
Both works aim at solving part-whole segmentation, which is close to this work.
-Multi-task query interaction papers:
[1] Fashionformer: A Simple, Effective and Unified Baseline for Human Fashion Segmentation and Recognition, ECCV 2022
[2] PolyphonicFormer: Unified Query Learning for Depth-aware Video Panoptic Segmentation, ECCV 2022
Both works adopt multiple task queries for joint reasoning and attention, which is similar to the proposed method in this work.
Moreover, some related works on unsupervised grouping are missing. For example,
[1] Unsupervised Hierarchical Semantic Segmentation with Multiview Cosegmentation and Clustering Transformers, CVPR 2022
[2] LASSIE: Learning Articulated Shapes from Sparse Image Ensemble via 3D Part Discovery, NeurIPS 2022
These works should be discussed in the related work part. Following the last item, the writing of this work can be improved in the next draft.
- The teaser could be a little fancier since it introduces a new concept. For example, it would be better to show the difference from previous segmentation settings.
- Following the last item, an overview of different property settings including panoptic segmentation, entity segmentation, multi-dataset training, prompting can be compared in one table.
- Is it possible to obtain the real results on PACO (MAP_{f1})? If so, how can this transformer architecture be extended to achieve that?
- A more comprehensive ablation study on the proposed modules is lacking (Table 3a). The authors only evaluated a few combinations. For example, what is the performance if MPE and TCM are present but AM is not?
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Also, I still have some questions that require the author’s clarification.
1, Could the author report the performance influence of using different architectural choices for Mask Prompt Encoder?
2, Could the author split the part-level datasets into seen and unseen categories and test the model’s generalization ability to unseen categories?
3. There are SAM models with different model sizes. Which SAM model is used for comparison?
Overall, I appreciate and like the paper’s ideas and solution for all-inclusive multi-level segmentation. The proposed method can solve the annotation inconsistency among various datasets. Compared to using many annotators to label large-scale images at three levels, the AIMS method is more efficient in utilizing multiple datasets with different-level annotations. The paper does suffer from some weaknesses, but I think its benefits outweigh its weaknesses. I hope to see those issues addressed by the authors in the rebuttal.
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: None
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the positive feedback and insightful comments.
### Q1: Clarity issue on the decoder.
We adhere to the transformer decoder design in the Mask2Former method, encompassing a set of learnable queries and nine transformer blocks. Each block is composed of a cross-attention layer, a self-attention layer, and a feed-forward network. In our AIMS decoder, we begin with three distinct transformer decoders designated for different levels of segmentation results, forming our baseline approach.
Building upon this foundation, we introduce the task complementarity module to facilitate information fusion between the various levels, such as the entity-part and relation-entity decoders. This design was inspired by our observation that separated transformer decoders consistently outperform a shared transformer decoder, a finding that is demonstrated in our table. Essentially, this implies that the different-level decoders benefit from having features distinct from each other.
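As a generic illustration of this kind of cross-level query fusion (our own sketch with made-up shapes and a simple concat-and-project exchange; not the paper's exact TCM design):

```python
import numpy as np

rng = np.random.default_rng(0)

def linear(x, w):
    """A bare linear projection standing in for a learned layer."""
    return x @ w

# Per-level query sets from three separate decoders (queries x channels).
q_part, q_entity, q_relation = (rng.standard_normal((100, 256)) for _ in range(3))

# Shared projection that maps the concatenated features back to 256 channels.
w_in = rng.standard_normal((512, 256)) * 0.01

def fuse(q_a, q_b, w):
    """Concatenate q_a with a pooled summary of q_b, then project back,
    letting one level's decoder see features from an adjacent level."""
    summary = np.broadcast_to(q_b.mean(axis=0), q_a.shape)
    return linear(np.concatenate([q_a, summary], axis=-1), w)

q_entity_fused = fuse(q_entity, q_part, w_in)        # entity-part exchange
q_relation_fused = fuse(q_relation, q_entity, w_in)  # relation-entity exchange
print(q_entity_fused.shape, q_relation_fused.shape)  # prints: (100, 256) (100, 256)
```

The point of the sketch is only that each level keeps its own query set while receiving a projected summary of a neighboring level, which is the pattern the rebuttal describes.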
---
### Q2: Missing some related works.
We will include the following related works and discuss them: Papers [1] and [2] propose a hierarchical segmentation structure for part- and entity-level segmentation, but they are exclusively targeted at human parsing, not general objects. Although [3] and [4] utilize separate queries to enhance the results of different tasks, they cannot associate mask predictions of multiple levels and prompt the outcomes. Furthermore, some unsupervised methods, such as those proposed in [5,6], group the pixels using an unsupervised clustering approach. Despite their innovative approach, they still have a significant performance gap compared to supervised methods.
[1] Hierarchical Human Parsing with Typed Part-Relation Reasoning, CVPR 2020.
[2] Differentiable Multi-Granularity Human Representation Learning for Instance-Aware Human Semantic Parsing, CVPR 2021.
[3] Fashionformer: A Simple, Effective and Unified Baseline for Human Fashion Segmentation and Recognition, ECCV 2022.
[4] PolyphonicFormer: Unified Query Learning for Depth-aware Video Panoptic Segmentation, ECCV 2022.
[5] Unsupervised Hierarchical Semantic Segmentation with Multiview Cosegmentation and Clustering Transformers. CVPR 2022.
[6] LASSIE: Learning Articulated Shapes from Sparse Image Ensemble via 3D Part Discovery. NeurIPS 2022.
---
### Q3: Changing the teaser figure for better illustration.
Please check Figure 1 in our submitted PDF, which contrasts our multi-level targets with single-level segmentation tasks.
---
### Q4: Adding the comparison to previous segmentation settings.
The table below summarizes the comparison of various settings, illustrating that our AIMS model encompasses all three levels.
| Task | relation-level | entity-level | part-level |
| :----: | :----: | :----: | :----: |
| Part Segmentation | ✗ | ✗ | ✓ |
| Panoptic/Instance/Entity Segmentation | ✗ | ✓ | ✗ |
| Scene Graph | ✓ | ✗ | ✗ |
| Segment Anything | ✗ | ✓ | ✓ |
| AIMS (Ours) | ✓ | ✓ | ✓ |
---
### Q5: More results on PACO.
We fine-tune our pretrained AIMS model directly on the class-aware PACO dataset, achieving 45.6 $AP^{obj}$ and 18.9 $AP^{opart}$. This compares favorably with the 43.4 $AP^{obj}$ and 17.7 $AP^{opart}$ obtained using the cascaded ViT-L FPN as reported in [7]. These results suggest that our AIMS model may serve as a superior pretraining model for fine-tuning in downstream tasks.
[7] PACO: Parts and Attributes of Common Objects. arXiv 2023.
---
### Q6: Lacking an ablation study in Table 3(a).
The table below presents an ablation study of the association module. It can be observed that the association module primarily enhances the association performance, while having only a slightly positive impact on the segmentation results. This finding aligns with the observations made in Table 3(a), where only slight variations in performance were noted. The values reported on the PPP dataset are $AP^P$, $AP^E$ and $AR^{ER}$. The values reported on the COCO-PSG dataset are $AP^R$, $AP^E$ and $AR^{RE}$.
| Design | PPP | COCO-PSG |
| ----- | ---- | ---- |
| Baseline | 24.5/53.4/69.7 | 38.9/40.4/50.9 |
| Baseline with TCM and MPE | 26.4/55.9/71.4 | 40.5/42.0/52.6 |
| Baseline with TCM, MPE and AM | 26.5/56.1/72.3 | 40.5/42.1/53.1 |
---
### Q7: The ablation study on mask prompt encoder structure.
We ablate the structure design of the mask prompt encoder in Table 4(a) of our paper.
---
### Q8: The experiments on part segmentation with seen/unseen data split.
We reorganized the PACO and PPP datasets, dividing them into 68 seen and 10 unseen object categories, and then trained only on the parts of seen objects. In the following table, we compare the part segmentation results between our method and SAM on the validation dataset, including both seen and unseen categories. The comparison makes it evident that our model achieves performance comparable to SAM's, but with the advantage of utilizing much less training data.
| Method | seen (AR@100) | unseen (AR@100) |
| :-----: | :----: | :----: |
| SAM | 46.6 | 46.2 |
| Ours | 47.0 | 46.4 |
---
### Q9: Clarity of using the SAM model.
For the SAM model, we use the one with a ViT-Huge backbone, which is much larger than the Swin-Large backbone our model uses.
For the computation cost, conducting full-image inference at an (800, 1333) image size across the three levels requires 854.0 GFlops and takes approximately 0.142 seconds per image on an A100 40G. This computation cost is comparable to that of the baseline method, which consumes about 801.0 GFlops with an inference time of 0.138 seconds per image. The SAM model we compare against costs 0.253 seconds per image in the same setting.
---
Rebuttal 2:
Title: Please let us know whether you have additional questions after reading our response
Comment: We appreciate your reviews and comments. We hope our responses address your concerns. Please let us know if you have further questions after reading our rebuttal.
We hope to address all the potential issues during the discussion period.
Thank you.
---
Rebuttal Comment 2.1:
Title: The authors solve my issues.
Comment: After I see other comments and the rebuttal, the authors provided solid results. Thus, I raise my core to 8. | Summary: This paper introduces a new task and a model for All-Inclusive Multi-Level Segmentation (AIMS), which segments images into three levels: part, entity, and relation. AIMS can also segment images based on mask prompts, which specify the region of interest. The paper proposes a unified AIMS model that uses multi-dataset multi-task training, task complementarity, association, and mask prompt encoder modules to address the challenges of annotation inconsistency and task correlation. The paper shows that the proposed method achieves better performance and generalization than existing methods on various segmentation tasks and datasets.
Strengths: 1. The AIMS task setup is interesting, which also includes the deduction of relationships between instances.
2. The prompt-based segmentation also makes sense, and it solves the multi-dataset training problem to some extent.
3. The proposed method achieves SOTA performance.
Weaknesses: 1. From the initial description of the proposed new task, it was difficult to understand why it would make sense to segment visual regions at the relation level. Please clarify this.
2. The base framework consists of three separate decoders with a task complementarity module. However, the framework is relatively simple and lacks task-specific designs.
3. Regarding the operation in Formula 5, does G_split mean decomposing the embedded feature into two features? The meaning of the symbol should be clearly described.
4. In Line 131, there is a mistake: "L_ce denotes binary cross-entropy loss". It should be L_bce.
5. The ablations in Table 3 are incomplete. Compared to the baseline with three separate decoders for the three levels, the addition of the association module improves the association performance but has some negative impact on segmentation performance on PPP. To verify the effectiveness of the association module, I suggest the authors experiment with both the MPE and TCM modules and report the results.
6. It is not clear how entity-part and relation-entity association each affect the final result. This should be included in the ablations.
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: Please refer to weakness.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 1 poor
Limitations: Limitations such as failure cases and broader impact should be discussed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the insightful comments to improve our paper further.
### Q1: Why it would make sense to segment visual regions in relation level?
We introduce relation-level segmentation results in entity pairs for two key reasons.
First, entity pairs represent the minimal relational constructs that can depict the entire scene graph of an image. This representation facilitates downstream tasks associated with scene graph analysis or generalized referring expression segmentation. As demonstrated in Table 5(a) of our paper and the subsequent table, the AIMS model contributes to enhanced performance, outperforming state-of-the-art methods in both the panoptic scene graph task (COCO-PSG dataset) and the generalized referring expression segmentation task (gRefCOCO) [1].
| method | cIoU | gIoU |
| :-----: | :----: | :----: |
| ReLA [1] | 52.26 | 54.44|
| Ours | 52.83 | 54.96|
Second, the ability of our model to learn from relation-level data allows it to learn richer and higher-level contextual features that can be used to discern visual entities. For example, given an image of a person sitting on a chair, the model can better segment the person and the chair into two entity-level masks if it understands that some relation associates them. Although this is a relation-level subtask, it is beneficial to entity-level segmentation. Our proposed task complementarity module can effectively propagate such useful relation-level information to the entity-level decoder. Furthermore, entity pairs are used to construct "pseudo" relation-level masks that serve as a form of mask prompt for our model to learn to split such relation-level masks into individual entity-level masks. This improves our model's capability to split or subdivide images into masks. Tables 1, 2 and 3 of our supplementary file show the benefits to the entity-level results of introducing the relation-level task.
[1] GRES: Generalized Referring Expression Segmentation. CVPR 2023.
---
### Q2: The relative simplicity of the task complementarity module.
Our baseline approach employs three distinct decoders for segmenting the different levels, allowing each decoder to develop task-specific features for its own task. The effectiveness of this separation, as opposed to using shared decoders, is substantiated in the comparison provided in the subsequent table. This comparison illustrates the advantages of the separated approach, highlighting the strong need for separate network parameters for learning level-specific features. The values reported on the PPP dataset are $AP^P$, $AP^E$ and $AR^{ER}$. The values reported on the COCO-PSG dataset are $AP^R$, $AP^E$ and $AR^{RE}$.
| Decoder Design | PPP | COCO-PSG |
| :-----: | :----: | :----: |
| Shared | 23.2/52.0/67.9 | 37.2/39.1/49.2 |
| Separated | 24.5/53.4/69.7 | 38.9/40.4/50.9 |
Building on the foundation of separated task-specific decoders, we introduce the task complementarity module to enable the sharing of each task's specific features with the others. Although this module is relatively simple, it proves effective for multi-level segmentation, as shown in Table 3(a) of our paper. Moreover, its streamlined design enhances adaptability across various base frameworks. For example, we successfully incorporate the task complementarity module into Mask DINO [2] by replicating the decoders three times for three-level segmentation (which serves as our baseline). The integration of this module in Mask DINO follows a pattern similar to its implementation in Mask2Former, thereby showcasing the flexibility and wide applicability of our design. The values reported on the PPP dataset are $AP^P$, $AP^E$ and $AR^{ER}$. The values reported on the COCO-PSG dataset are $AP^R$, $AP^E$ and $AR^{RE}$.
| Method | PPP | COCO-PSG |
| ----- | :----: | :----: |
| Mask DINO | 23.9/52.7/68.9 | 38.0/39.2/48.7 |
| Mask DINO with TCM | 25.1/54.3/70.1 | 38.8/40.2/49.3 |
[2] Mask dino: Towards a unified transformer-based framework for object detection and segmentation. CVPR 2023.
---
### Q3: Clarity issue on Formula 5.
The $G_{split}$ operation means splitting the tensor along the channel dimension; it is the inverse of the concatenation operation.
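As a minimal illustration (made-up shapes and function names, assuming the feature was formed by channel-wise concatenation; not our actual implementation):

```python
import numpy as np

def g_concat(a, b):
    """Concatenate two feature tensors along the channel (last) axis."""
    return np.concatenate([a, b], axis=-1)

def g_split(x, channels_a):
    """Inverse of g_concat: split a tensor back into two parts along
    the channel axis, given the first part's channel count."""
    return x[..., :channels_a], x[..., channels_a:]

a = np.ones((4, 8))            # e.g. 4 queries, 8 channels
b = np.zeros((4, 8))
x = g_concat(a, b)             # shape (4, 16)
a2, b2 = g_split(x, a.shape[-1])
assert np.array_equal(a, a2) and np.array_equal(b, b2)
```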
---
### Q4: The modification from L_ce to L_bce.
Thanks for pointing out this mistake. We will correct it in our paper.
---
### Q5: The ablation study on the baseline with TCM and MPE in Table 3(a)
The table below presents an ablation study of the association module. It can be observed that the association module primarily enhances the association performance, having a positive impact only on the cross-level (entity-part/EP and relation-entity/RE) results. This finding aligns with the observations made in Table 3(a), where only slight variations in performance were noted. The values reported on the PPP dataset are $AP^P$, $AP^E$ and $AR^{ER}$. The values reported on the COCO-PSG dataset are $AP^R$, $AP^E$ and $AR^{RE}$.
| Design | PPP | COCO-PSG |
| ----- | ---- | ---- |
| Baseline | 24.5/53.4/69.7 | 38.9/40.4/50.9|
| Baseline with TCM and MPE | 26.4/55.9/71.4 | 40.5/42.0/52.6|
| Baseline with TCM, MPE and AM | 26.5/56.1/72.3 | 40.5/42.1/53.1 |
---
### Q6: Independent influence of entity-part and relation-entity association.
The table below illustrates the impact of the two association modules on our final segmentation results. Notably, each association module exclusively influences the association performance between its corresponding two levels, without affecting the others. The values reported on the PPP dataset are $AP^P$, $AP^E$ and $AR^{ER}$. The values reported on the COCO-PSG dataset are $AP^R$, $AP^E$ and $AR^{RE}$.
| Design | PPP | COCO-PSG |
| ----- | ---- | ---- |
| Baseline | 24.5/53.4/69.7 | 38.9/40.4/50.9|
| Baseline with EP association | 24.2/53.1/70.5 | 39.2/40.5/51.1 |
| Baseline with RE association | 24.4/53.3/69.9 | 39.0/40.5/52.0 |
| Baseline with EP and RE association | 24.3/53.1/70.6 | 39.0/40.5/52.0 |
---
Rebuttal Comment 1.1:
Title: Please let us know whether you have additional questions after reading our response
Comment: We appreciate your reviews and comments. We hope our responses address your concerns. Please let us know if you have further questions after reading our rebuttal.
We hope to address all the potential issues during the discussion period.
Thank you.
---
Rebuttal Comment 1.2:
Title: More clarification about limitations and broader impact
Comment: ## Limitations
Our work aims to segment an image into various regions across different hierarchical levels. Specifically, we define three explicit levels: relation, entity, and part, and utilize multiple datasets for supervised training. Consequently, the trained model may be biased toward human annotations. The categorization into these levels (especially part level) is highly subjective and might not always align perfectly with all downstream applications. The concurrent work, the SAM model, also suffers from this kind of problem and merely uses 3 tokens to learn segmentation at three levels implicitly. Class-agnostic segmentation can somehow alleviate this problem but cannot fully solve it. Exploring ways to segment images hierarchically in an unsupervised manner may present a more inspiring direction, potentially eliminating annotation and human biases. This is a promising direction that we leave as future work.
## Broader Impact
Our work can be used to obtain high-quality masks at multiple hierarchical levels. This high-quality mask can provide a smooth and minimal-effort experience for image editing users including amateurs and beginners. People who have image editing needs usually perform editing on the visual regions at the 3 hierarchical levels our model can handle. Our model can tremendously reduce the time, manual labor, and expertise required for selecting regions of interest for advanced image editing. On the other hand, this work will potentially bring a negative impact on the jobs and businesses of image editing experts, due to the lowered barrier to entry for amateurs who can easily perform advanced image editing with the help of our model’s high-quality hierarchical masks.
### We will add those parts to our paper. Thanks for your suggestions. | Summary: This work presents AIMS, a multi-level image segmentation model with levels representing parts, instance, and relation. Further, a curated dataset is created from several existing segmentation datasets. AIMS outperforms the baselines on the curated dataset.
Strengths: 1. The proposed architecture reasonably bridges information across levels.
2. The quantitative and qualitative performance are good.
3. Experiments, especially the ablations, are comprehensive.
Weaknesses: 1. It's unclear how the prompt mask is obtained and used during inference. When decoding with such a mask, wouldn't the experiments against baselines be unfair comparisons, since baselines might not have such additional localization information?
2. The comparison with SAM is only partially done. In particular, SAM can provide fine segmentation masks even from low-accuracy mask prompts, while AIMS has not explicitly stated the quality of its mask prompts -- based on Figure 3, the masks seem to be accurate.
3. The presentation can generally be improved. Figure 2 provides details that distract readers who lack further context, whilst the mask prompt encoder could be motivated earlier.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: 1. Does the training set aggregation make any modification of the original datasets/annotations? Would the curated dataset (split) be released?
2. The comparison in Figure 4 only shows part segmentation. What would the comparison be when it comes to multi-instance level?
3. In table 5(b), shouldn't the $AR^E$ score for EntitySeg be bold since 87.2 > 87.1?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: Limitations are not discussed by the authors. Apart from potential limitations in Weaknesses, I see there are these limitations:
1. The computational cost is unspecified.
2. The complementarity module and association module are designed to couple with the base framework, thus not obviously usable by other architectures.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the positive feedback and insightful comments.
### Q1: Unclear about the prompt mask used in inference.
For a fair comparison, we conduct all three levels of segmentation using the full-image mask prompts throughout our experiments, without unfairly introducing any specific localization or region information. In Table 5(b), we show the entity- and part-level results between the SAM model and our method, employing only full-image mask prompts for the evaluation.
To further substantiate the efficacy of our approach, we present a comparative analysis of the results in terms of AR@100. This comparison is conducted by prompting both SAM and our AIMS models with the same ground-truth entity mask for part-level inference. The outcomes, detailed in the following table, serve as additional evidence of the strengths of our method.
| Method | PACO | ADE20K|
| :-----: | :----: | :----: |
| SAM | 54.4 | 50.6 |
| Ours | 55.0 | 51.3 |
---
### Q2: The ablation study on the quality of mask prompt.
In the following two tables, we show ablation studies on the quality of the mask prompt on the PACO dataset, where we degrade the ground-truth entity masks and report results at the part and entity levels, respectively.
The performance comparison of part-level results is as follows:
| Method | Ground-Truth Mask| Bounding Box | Randomly Extended Box | Randomly Created Mask |
| ----- | ---- | ---- | ---- | ---- |
| SAM | 54.4 | 53.5 | 52.8 | 52.5 |
| Ours | 55.0 | 54.2 | 53.7 | 53.4 |
The performance comparison of entity-level results is as follows:
| Method | Ground-Truth Mask| Bounding Box | Randomly Extended Box | Randomly Created Mask |
| ----- | ---- | ---- | ---- | ---- |
| SAM | 100.0 | 98.7 | 98.6 | 98.9 |
| Ours | 100.0 | 99.1 | 98.9 | 98.8 |
For the extended bounding box, we randomly perturbed the four corners of the original bounding boxes. For the arbitrary mask, we randomly added polygon points to the extended bounding boxes to create random masks. We can provide the corresponding code to generate these boxes and masks in the next discussion round, due to the character limit.
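For illustration only (this is not the authors' actual code): a minimal sketch of how such box and mask perturbations could be generated. The shift magnitude `max_shift` and the number of extra polygon points are assumptions.

```python
import random

def perturb_box(box, max_shift=10, seed=None):
    """Randomly extend a bounding box by perturbing its four corners.

    box: (x1, y1, x2, y2); shifts are sampled uniformly in [0, max_shift]
    so the box only grows and remains a superset of the original.
    """
    rng = random.Random(seed)
    x1, y1, x2, y2 = box
    return (x1 - rng.uniform(0, max_shift),
            y1 - rng.uniform(0, max_shift),
            x2 + rng.uniform(0, max_shift),
            y2 + rng.uniform(0, max_shift))

def box_to_random_polygon(box, n_extra=4, seed=None):
    """Turn an (extended) box into a random polygon outline by inserting
    extra points sampled inside the box between its corners."""
    rng = random.Random(seed)
    x1, y1, x2, y2 = box
    corners = [(x1, y1), (x2, y1), (x2, y2), (x1, y2)]
    extra = [(rng.uniform(x1, x2), rng.uniform(y1, y2)) for _ in range(n_extra)]
    poly = []
    for i, c in enumerate(corners):
        poly.append(c)          # keep the corner
        if i < len(extra):
            poly.append(extra[i])  # interleave a random interior point
    return poly
```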
---
### Q3: The improved presentation in Figure 2.
We have improved Figure 2 with a better presentation. Please check Figures 1 and 2 in our submitted PDF.
---
### Q4: Training Set.
We describe this in our supplementary file. All models are trained using five datasets: COCO, EntitySeg, PascalVOC Part, PACO, and COCO-PSG. Given that these datasets' original training and validation splits are tailored for single tasks, we collate the images and reorganize them to suit our AIMS task. Initially, we select 1069 and 1000 validation images from PPP (which covers the part and entity levels) and COCO-PSG (which covers the entity and relation levels), respectively. Following this, we eliminate any duplicate images in the unified training set that are present in the validation images, resulting in a refined training set comprising approximately 236.7K unique images. In a nutshell, we only reorganized existing datasets and did not introduce new annotations.
We will release the curated dataset to the community for future research and reproducibility.
---
### Q5: The wrong bold location.
Thanks for pointing this out. We will correct it in our paper.
---
### Q6: Computation cost.
The table below provides the parameter sizes of our largest model with the Swin-Large backbone. Notably, the newly proposed modules, including the task complementarity module (TCM), mask prompt encoder (ME), and association module (AM), introduce only a minor increase in parameters compared to our baseline model (backbone and baseline decoder). This is due to the efficient shared-weight design implemented in the three proposed modules. Overall, AIMS with the Swin-Large backbone has 246,236,372 parameters (200,845,932 + 42,626,304 + 2,630,144 + 1,384 + 132,608), compared to 641,090,864 parameters in SAM with the ViT-Huge backbone.
| Backbone | Baseline Decoder | TCM | ME | AM |
| :-----: | :----: | :----: | :-----: | :----: |
| 200,845,932 | 42,626,304 | 2,630,144 | 1,384 | 132,608 |
For the computation cost, conducting full image inference under (800, 1333) image size in three levels requires 854.0 GFlops and takes approximately 0.142 seconds per image on A100 40G. This computation cost is comparable to the baseline method, which consumes about 801.0 GFlops and has an inference time of 0.138 seconds per image. For the SAM model we compare, it would cost 0.253 seconds per image within the same setting.
---
### Q7: The universality of proposed complementarity and association module.
We introduce the task complementarity and association modules, drawing inspiration from the state-of-the-art segmentation decoder design, which employs learnable queries and transformer blocks. This approach allows for adaptability with minimal modification, aligning with designs commonly found in recent transformer-based segmentation methods [1,2,3]. For instance, we integrate these two modules into the improved Mask DINO [1] by replicating the decoders three times for three-level segmentation (our baseline). The incorporation of these modules in Mask DINO is carried out in a manner akin to their implementation in Mask2Former, demonstrating the versatility of our design.
| Method | PPP | COCO-PSG |
| :-----: | :----: | :----: |
| Mask DINO | 23.9/52.7/68.9 | 38.0/39.2/48.7 |
| Mask DINO with TCM and AM | 25.3/54.2/71.5 | 38.9/40.3/49.9 |
[1] Mask dino: Towards a unified transformer-based framework for object detection and segmentation. CVPR 2023.
[2] MaX-DeepLab: End-to-End Panoptic Segmentation With Mask Transformers. CVPR 2021.
[3] k-means Mask Transformer. ECCV 2022.
---
Rebuttal 2:
Title: Please let us know whether you have additional questions after reading our response
Comment: We appreciate your reviews and comments. We hope our responses address your concerns. Please let us know if you have further questions after reading our rebuttal.
We hope to address all the potential issues during the discussion period.
Thank you.
---
Rebuttal Comment 2.1:
Comment: I thank the authors for the informative rebuttal. They address most of my concerns and I'd like to raise my score to 6, under the assumption that some of the discussions would be included to further enhance the paper. | Summary: This paper introduces a novel task called All-Inclusive Multi-Level Segmentation (AIMS) and proposes a unified AIMS model to address the challenges of annotation inconsistency and task correlation. The model consists of a shared image encoder and three independent decoders for part, entity, and relation predictions. It incorporates a Task Complementarity Module (TCM) to fuse task information and an Association Module to establish associations between different levels of segmentation. The model is trained using a **combination** of existing segmentation datasets. The model also incorporates a Mask Prompt Encoder (MPE) to provide supervision signals.
The proposed method outperforms SOTA methods on both panoptic part segmentation and panoptic scene graph estimation tasks. The proposed method also outperforms the Segment Anything model (SAM) on entity-level segmentation, even though the proposed method was trained on much less data than SAM.
Strengths: ### Novel approach to a multi-level segmentation task
* The paper introduces a novel task called All-Inclusive Multi-Level Segmentation (AIMS) and proposes a unified model to address the challenges of annotation inconsistency and task correlation. This task formulation and the proposed model are unique and provide a fresh perspective on multi-level segmentation.
### Comprehensive experiments with detailed ablations and good results
* The paper presents a comprehensive evaluation of the proposed method, comparing it with state-of-the-art methods and conducting ablation studies.
* The experiments are well-designed, and the results demonstrate the effectiveness and generalization capacity of the proposed method.
* The authors also provide detailed explanations of the model components and training settings, ensuring reproducibility.
I also appreciate the clarity of writing in this paper which has a thorough analysis of problems in existing segmentation frameworks motivating the need for the AIMS task.
Weaknesses: ### Missing performance analysis on images with a large number of objects
* SAM has shown impressive performance on images with a large number of objects. How does the proposed method fare in such conditions? It would be useful to break down the performance on the basis of the number of objects and compare it with SAM.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: * The authors mention that Swin-Large backbone provides the best results, but can you provide more details on the decoder architecture? Also, what is the total model size and computational requirement (FLOPs) of the model?
* The legend in Figure 2 is missing the symbol for "Add"
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: No limitations section.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the positive feedback and insightful comments.
### Q1: Missing performance analysis on images with a large number of objects.
We present the recall@100 comparison of entity-level results between SAM and our model on selected LVIS validation data containing more than 20 objects, as well as on crowd-human validation data. The average object counts in our LVIS and crowd-human subsets are 21 and 23, respectively. The table below demonstrates that our model achieves performance comparable to the SAM model on images containing a large number of objects.
| method | LVIS | crowd-human |
| :-----: | :----: | :----: |
| SAM | 37.2 | 97.5 |
| Ours | 36.9 | 97.8 |
Furthermore, we compare our method with the SAM model on the object clutter indoor segmentation for robotic grasping (OCID) and few-shot segmentation (FSS) datasets using recall@100. These comparisons highlight the advantages of our method's separated-decoder design. This is because the SAM model does not explicitly divide its results into three levels and only selects the prediction with the minimum loss against the ground truth for training. That approach places heavy demands on user selection effort at certain levels, whereas our design mitigates this limitation.
| method | OCID | FSS |
| :-----: | :----: | :----: |
| SAM | 55.17 | 73.60 |
| Ours | 76.63 | 86.26 |
---
### Q2: The details of decoder structure.
We adhere to the transformer decoder design in the Mask2Former method, encompassing a set of learnable queries and nine transformer blocks. Each block comprises a cross-attention layer, a self-attention layer, and a feed-forward network. In our AIMS decoder, we begin with three distinct transformer decoders designated for different levels of segmentation results, forming our baseline approach.
Building upon this baseline, we introduce the task complementarity module to facilitate information fusion across different levels, such as the fusion between entity-part and relation-entity decoders. This design is inspired by our observation that separate transformer decoders consistently outperform a shared transformer decoder; this observation is demonstrated in the following table. Essentially, our experimental findings support the notion that decoders at different levels possess distinct features compared to other decoders. The values reported on the PPP dataset are $AP^P$, $AP^E$ and $AR^{ER}$; the values reported on the COCO-PSG dataset are $AP^R$, $AP^E$ and $AR^{RE}$.
| Decoder Design | PPP | COCO-PSG |
| :-----: | :----: | :----: |
| Shared | 23.2/52.0/67.9 | 37.2/39.1/49.2 |
| Separated | 24.5/53.4/69.7 | 38.9/40.4/50.9 |
Our proposed task complementarity module capitalizes on these individual strengths, enabling each level decoder to interact with and be enriched by the others. This synergistic approach improves performance compared to our baseline, highlighting the efficacy of tailored, level-specific interactions within the decoder architecture.
For the association module, we directly sample positive embeddings from each level generated by the final transformer block to establish associations between two adjacent levels. It's important to note that these sampled positive embeddings are pre-processed through a fully connected (FC) layer.
---
### Q3: Model size and computational requirement.
The table below provides the parameter sizes of our largest model with the Swin-Large backbone. Notably, the newly proposed modules, including the task complementarity module (TCM), mask prompt encoder (ME), and association module (AM), introduce only a minor increase in parameters compared to our baseline model (backbone and baseline decoder). This is due to the efficient shared-weight design implemented in the three proposed modules. Overall, AIMS with the Swin-Large backbone has 246,236,372 parameters (200,845,932 + 42,626,304 + 2,630,144 + 1,384 + 132,608), compared to 641,090,864 parameters in SAM with the ViT-Huge backbone.
| Backbone | Baseline Decoder | TCM | ME | AM |
| :----: | :----: | :----: | :-----: | :----: |
| 200,845,932 | 42,626,304 | 2,630,144 | 1,384 | 132,608 |
For the computation cost, conducting full image inference under (800, 1333) image size in three levels requires 854.0 GFlops and takes approximately 0.142 seconds per image on A100 40G. This computation cost is comparable to the baseline method, which consumes about 801.0 GFlops and has an inference time of 0.138 seconds per image. For the SAM model we compare, it would cost 0.253 seconds per image within the same setting.
---
Rebuttal Comment 1.1:
Comment: I appreciate the authors' responses. They have answered all my questions. I think this paper makes a good contribution and would like to maintain my rating as "Accept". | Rebuttal 1:
Rebuttal:
We thank the reviewers for the insightful comments regarding our work. We have carefully addressed each of your concerns, and our responses can be found in the respective rebuttal sections for each reviewer.
In addition, we have included a PDF containing three figures, as requested by some of the reviewers. We will refine our paper in light of your insightful advice.
Pdf: /pdf/9ed8db5db5a15c3873f516a77f632e0108fcf503.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: This work proposes a unified multi-level segmentation (AIMS) approach. For better generalisation, the model is concurrently trained on multiple dataset consisting varied hierarchical level annotations across parts, entities and relations. In order to utilise signals from multiple hierarchical level as well as infuse model with level-awareness for each training sample, AIMS proposes three modules - task complementarity, association and mask prompt encoder. Experimental evaluation shows AIMS performing better than other state-of-the-art class-aware as well as class-agnostic segmentation.
Strengths: Originality
Although model is inspired from Mask2former, the inclusion of other novel modules packages AIMS for multi-level hierarchical segmentation. Thus I note sufficient novelty in the proposed architecture.
Quality
The paper is well-written and easy to follow. The authors could be clearer upfront that AIMS is a completely class-agnostic model (i.e., a mask-proposal model) with no semantic labelling.
Significance
1. As far as I am aware, AIMS is the first segmentation model to be trained with multi-level semantic understanding. Though SAM is trained with fine-grained annotations, it lacks part- and relation-level understanding, and thus certain fine-grained semantic features may not be fully segmented out.
2. Moreover, training with multiple datasets enables AIMS to be deployed for range of image editing and manipulation applications.
3. AIMS achieves these advantages with as minimal data as possible (237K vs 11M in SAM).
Weaknesses: 1. As pointed earlier, AIMS is class-agnostic. Hence it cannot label the output segments.
2. AIMS uses an interactive setting by deploying a mask prompt encoder (MPE) to specify what the model should segment. However, MPE usage is limited to providing an unmasked region of an input image for the model to act on. There is no provision to use text, bounding boxes, or points as inputs, making for a very limited interactive setting.
3. For the same reason, though powerful, AIMS cannot perform referring expression segmentation, despite being trained with semantic relation understanding.
4. Overall, the model is novel but the evaluation lacks distinguishing aspects that AIMS in the very first place was designed for.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: The AIMS model has three separate outputs, while other models output a single mask irrespective of entities or parts. So in Table 5(b), how does one measure separate AR for part and entity segmentation? Do you run separate inference with different prompts for the part level and entity level, or is a full-image mask prompt applied?
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The authors haven't addressed any limitations. Some of the weaknesses listed above could be elaborated to address the limitations of the paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the positive feedback and insightful comments.
### Q1: AIMS is class-agnostic model and cannot label the output segments.
Thanks for the suggestion. Although mask labeling is related to our work, the main goal of our work is to develop an all-inclusive segmentation model capable of delivering high-quality multi-level mask proposals for various downstream tasks, including class-aware ones. This is because class-agnostic segmentation has shown greater effectiveness on unseen categories and image domains than class-aware segmentation, as demonstrated in entity segmentation [1,2] and SAM [3]. In Table 5(a) of our paper, we demonstrate the advantages of fine-tuning the AIMS pre-trained model for class-aware tasks such as panoptic part segmentation and scene graph estimation. This approach leads to performance that outperforms the state-of-the-art methods on each task, showcasing the flexibility and effectiveness of our model even on class-aware tasks.
Furthermore, AIMS works complementarily with existing mask labeling methods [4,5,6] that can be applied to external mask proposals.
[1] Open World Entity Segmentation. TPAMI 2022.
[2] High-Quality Entity Segmentation. ICCV 2023.
[3] Segment Anything. ICCV 2023.
[4] Open-vocabulary semantic segmentation with mask-adapted CLIP. CVPR 2023.
[5] Open-Vocabulary Universal Image Segmentation with MaskCLIP. ICML 2023.
[6] Scaling Open-Vocabulary Image Segmentation with Image-Level Labels. ECCV 2022.
---
### Q2: Limited interactive settings like text, bounding box or points.
Our AIMS model is not designed for interactive segmentation. Instead, we focus on automatic image segmentation at multiple explicit levels in an all-inclusive manner, without requiring the intricate prompt engineering and parameter tuning during inference that SAM needs to obtain desired results. SAM's weakness is that it does not enforce explicit levels of segmentation and only selects the prediction that has the smallest loss against the ground truth for training. Therefore, our setting and the proposed method are more desirable for large-scale automated segmentation mask generation without costly human intervention and correction.
Interactive settings are usually sensitive to geometric prompts. For instance, using two-point prompts with slightly different locations in SAM can give very different mask predictions. Furthermore, for many applications that do not rely on user inputs (e.g., automated image content analysis), using a regular grid of points as prompts is unfavorable since the model can fail to segment objects that do not overlap with those points.
While our proposed framework can accept mask prompts, this feature is part of our design to address annotation inconsistencies among datasets. It encourages the network to learn how to perform further separation on a given mask instead of functioning as an interactive segmentation component. We do not expect users to be able to draw out sufficiently good mask prompts for our model.
---
### Q3: Lacking ability for referring expression segmentation.
Similar to the first question, one of AIMS's roles is to obtain good pretrained weights for finetuning on downstream tasks. We test the AIMS model on generalized referring expression segmentation [7] using the gRefCOCO dataset.
The detailed structure design proceeds as follows: First, we obtain the text embedding using the BERT model. Within each level of the decoder, we integrate three additional transformer blocks. These blocks include a cross-attention layer between the query and text embedding, a cross-attention layer between the query embedding and image features, a self-attention layer, and a feedforward network. In each cross-attention layer, the query embedding serves as the query, while the other embedding provides the keys and values. These additional layers are appended to the default nine transformer layers. Ultimately, the three-level results are merged for bipartite matching. During inference, we directly select the mask with the highest score. To ensure a fair comparison, we employ a similar number of training iterations to the ReLA method [7], utilizing the Swin-Tiny backbone.
The comparison results are presented in the table below, demonstrating the generalization ability of the AIMS model in the context of referring expression segmentation.
| method | cIoU | gIoU |
| :-----: | :----: | :----: |
| ReLA [7] | 52.26 | 54.44|
| Ours | 52.83 | 54.96|
[7] GRES: Generalized Referring Expression Segmentation. CVPR 2023.
---
### Q4: Evaluation Metrics.
Our evaluation metrics face the complex challenge of assessing three performance levels simultaneously, making it difficult to devise a unified metric that accurately balances the weight across these levels. Consequently, we opt for decoupled evaluation metrics tailored to each level, ensuring a better model demonstrates enhanced performance across all three levels. This approach not only allows us to pinpoint specific areas of performance improvement when implementing new designs but also helps validate our method's effectiveness.
For unified evaluation metrics, we explore it in downstream tasks such as PartPQ and PWQ for panoptic part segmentation and R/mR@100 for panoptic scene graph construction, as shown in Table 5(a) of our paper. That shows our model also performs better than state-of-the-art methods by using unified evaluation metrics.
---
### Q5: The detailed inference setting in Table 5(b).
Using a full-image mask prompt, we compare the entity- and part-level results between SAM and our model. To ensure a fair comparison in the part-level results, we show the result comparison in AR\@100 in the following table by utilizing the ground-truth entity-level masks as mask prompts for part-level inference.
| method | PACO | ADE20K|
| :-----: | :----: | :----: |
| SAM [3] | 54.4 | 50.6 |
| Ours | 55.0 | 51.3 |
---
Rebuttal 2:
Title: Please let us know whether you have additional questions after reading our response
Comment: We appreciate your reviews and comments. We hope our responses address your concerns. Please let us know if you have further questions after reading our rebuttal.
We hope to address all the potential issues during the discussion period.
Thank you. | null | null | null | null | null | null |
Semi-Supervised Contrastive Learning for Deep Regression with Ordinal Rankings from Spectral Seriation | Accept (poster) | Summary: This paper addresses semi-supervised deep regression problems via contrastive learning. To make full use of the unlabeled data, the feature similarities between unlabeled samples are constrained to agree with their ranks, so that accurate ordinal relationships can be recovered from feature similarity through spectral seriation algorithms.
Strengths: 1. The paper uses contrastive learning for unlabeled data by fully considering the ordinal relationship.
2. The paper adopts a robust ranking method, spectral seriation, which boosts the overall contrastive performance.
3. Real-world application experiments on medical imaging processing demonstrate its effectiveness and significance.
Weaknesses: 1. It seems that the paper does not have obvious weaknesses; if any, the main weaknesses are in the model design section. I do not understand how to optimize Equation (7).
2. Some descriptions need to be more clear. For example:
Line 3, "only unlike for" -> "only, unlike for"?
Line 28 said, "This is not possible...". What does "This" refer to?
Line 105, what is "Fiedler vector"?
3. Some typos:
Line 105 "ofL".
Equation (7), what is the meaning of $R^{'} - R^{'}_{[i]}$?
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: 1. Why does the proposed method need to rank the supervision of unlabeled predictions with seriation rankings?
2. Have you considered other baseline methods? Like a simple SimClr method for unlabeled data?
3. Equation (7), what is the meaning of $R^{'} - R^{'}_{[i]}$? Could you provide more explanations?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the positive comments! We address remaining concerns below.
\
### W1) Optimizing Eq. 7
We make use of the differentiable combinatorial solver proposed in [1] to optimize Eq. 7. The loss function $\ell$ enforces predictions of discrete combinatorial values, such as rankings, to follow some ground-truth order. At a high level, $\ell$ uses interpolation methods to make the discrete combinatorial predictions differentiable. We do not provide a detailed description of this function since we directly use the implementation from [1], and it is better explained in the original work. We will include an intuitive explanation in our revision, however.
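As an intuition-only sketch (assuming the perturb-and-re-solve gradient rule of [1]; the ranking solver and $\lambda$ value below are illustrative, not the paper's implementation):

```python
import numpy as np

def rank_solver(w):
    """Blackbox combinatorial 'solver': map continuous scores to ranks."""
    return np.argsort(np.argsort(w)).astype(float)

def blackbox_rank_grad(w, grad_y, lam=10.0):
    """Gradient estimate in the style of [1]: perturb the solver input
    in the direction of the incoming gradient dL/dy, re-solve, and use
    the scaled difference of the two discrete solutions as dL/dw."""
    y = rank_solver(w)
    y_pert = rank_solver(w + lam * grad_y)
    return (y_pert - y) / lam
```

The forward pass returns the exact discrete ranking; only the backward pass is smoothed via the perturbed re-solve.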
\
### W2) Clearer descriptions
>Line 28 said, "This is not possible...". What does "This" refer to?
"This" refers to using unlabeled data only for unsupervised contrastive learning.
>Line 105, what is "Fiedler vector"?
The Fiedler vector is defined as the eigenvector corresponding to the smallest non-zero eigenvalue of the graph Laplacian matrix [2].
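As a hedged illustration (a toy numpy example, not the paper's code): for the Laplacian of a path graph, sorting nodes by the Fiedler vector recovers the serial order, which is the intuition behind spectral seriation.

```python
import numpy as np

# Laplacian of a path graph on 5 nodes: L = D - A
A = np.diag(np.ones(4), 1) + np.diag(np.ones(4), -1)
L = np.diag(A.sum(axis=1)) - A

eigvals, eigvecs = np.linalg.eigh(L)   # eigenvalues in ascending order
# the smallest eigenvalue is ~0 (constant eigenvector); the Fiedler
# vector corresponds to the smallest non-zero eigenvalue
fiedler = eigvecs[:, 1]

# for a path graph, sorting nodes by the Fiedler vector recovers the
# path order (possibly reversed)
order = np.argsort(fiedler)
print(order)  # either [0 1 2 3 4] or [4 3 2 1 0]
```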
We will make the necessary clarifications in the revision.
\
### W3) Meaning of $R' - R'_{[i]}$
$R' - R'_{[i]}$ is a vector obtained by subtracting the $i$th element of $R'$ from every element of the ranking vector $R'$.
For example, if
$R' = [1,5,2,3,4]$,
$R' - R'_{[3]}$ is calculated by subtracting the third entry of $R'$, i.e. 2, from every element of $R'$.
This gives us $[-1,3,0,1,2]$.
This is used to supervise the relative relationships between similarity values in matrix $S'$. Intuitively, if the ranking of sample $i$ is closer to sample $j$ compared to sample $k$, then the cosine similarity between samples ${i,j}$ should also be greater than samples ${i,k}$. $R' - R'_{[i]}$ calculates the ranking difference between samples relative to sample $i$.
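In code, the worked example above is a one-line broadcast (a trivial numpy illustration):

```python
import numpy as np

R = np.array([1, 5, 2, 3, 4])   # ranking vector R'
i = 2                            # the third element (0-indexed)
diff = R - R[i]                  # R' - R'_[3] in the 1-indexed notation
print(diff)                      # [-1  3  0  1  2]
```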
\
### Q1) Supervising unlabeled predictions with seriation rankings
We note that the rankings obtained through spectral seriation are robust to noise and have error-correction properties. Theoretical bounds are derived in Theorems 2 and 3.
Because of this, using the recovered rankings to supervise output predictions should provide additional benefits compared to leaving them unsupervised. Furthermore, this also ensures that the feature relationships learnt through contrastive learning are consistent with the predicted outputs on unlabeled samples.
We empirically verify that including this constraint leads to additional improvements through ablation experiments in Figure 3 and Tables S2 and S3 of the supplementary materials.
\
### Q2) Using SimCLR for unlabeled samples
Contrastive learning methods designed for classification generally do not work well when applied to regression, since they are unable to account for label distance and order relationships. The loss function treats all negative pairs equivalently regardless of their label values, which often results in ineffective features being learnt. This has been verified in existing works such as [3-4].
We can also confirm that applying SimCLR only to unlabeled samples does not work well, as it disrupts the features learnt by the contrastive regression loss on labeled samples. We show results when applied to our three tasks.
**Synthetic dataset, 1/4 labels (Section 3.1)**
| |MAE|
|---|---|
|SimCLR on unlabeled samples|0.059|
|CLSS (ours) |0.027|
**Brain age estimation, 1/2 labels (Section 3.2)**
| |MAE|
|---|---|
|SimCLR on unlabeled samples|10.95|
|CLSS (ours) |9.37|
**Age estimation, 1/25 labels (Section 3.3)**
| |MAE|
|---|---|
|SimCLR on unlabeled samples|9.91|
|CLSS (ours) |9.59|
\
### Q3) Meaning of $R' - R'_{[i]}$
See response to W3.
\
[1] Pogančić, Marin Vlastelica, et al. "Differentiation of blackbox combinatorial solvers." ICLR. 2019.
[2] Atkins, Jonathan E., Erik G. Boman, and Bruce Hendrickson. "A spectral algorithm for seriation and the consecutive ones problem." SIAM Journal on Computing 28.1 (1998): 297-310.
[3] Dai, Weihang, et al. "Adaptive contrast for image regression in computer-aided disease assessment." IEEE Transactions on Medical Imaging 41.5 (2021): 1255-1268.
[4] Zhang, Shihao, et al. "Improving Deep Regression with Ordinal Entropy." arXiv preprint arXiv:2301.08915 (2023).
---
Rebuttal Comment 1.1:
Title: After the rebuttal
Comment: Thanks for the responses. Most of my concerns have been addressed, and I am happy to keep my score. | Summary: This paper extends contrastive regression methods to a semi-supervised setting using unlabeled data. They leverage the feature similarity matrix between these samples to infer ordinal relationships, a process guaranteed to be robust if the error is within defined bounds.
Strengths: This paper successfully extends contrastive regression to include unlabeled datasets, marking a significant step in semi-supervised learning. The authors' writing style is commendable: clear, succinct, and easy to follow. Notably, the proposed method isn't just empirical; it is firmly grounded in theoretical evidence, which enhances its credibility and potential for practical application.
Weaknesses: * Ablation studies are missing, ref L2,3
* Missing explanations on loss function design, for each component, what the purpose
* Section 2.3 is not clear, ref Q5
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: * Why "contrastive learning that are only able to use labeled data"? Contrastive learning methods, e.g. SimCLR, MoCo, etc., work with unlabeled data
* How to achieve similarity matrix S?
* why did you set the loss function like formulation (9), for each part, would you give intuitive explanations?
* In Eqs. (7, 8, 9), what does the ranking function $\ell()$ look like?
* For Section 2.3, what are the intuitive explanations of Theorems 2 and 3? What insights can be taken away, and what are the theorems trying to prove?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: * Ablation studies on each loss component are missing
* Ablation studies on hyperparameters are missing
* Section 2.3 is not clear enough
I may have misunderstood, if you solve my question, I am willing to increase the score
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the encouraging feedback! We address questions below.
\
### Q1) Labels for contrastive learning
To clarify, it is possible to perform contrastive learning using unlabeled data **for classification** (e.g. SimCLR, MOCO).
It is **NOT possible** for existing contrastive learning methods to use unlabeled data **for regression**.
This is because regression labels **reflect order and distance relationships between samples** [1-2], but for classification, we only need to distinguish between different classes. Effective feature representations for regression therefore **need to reflect label distance relationships** unlike for classification.
Intuitively, for samples $i,j,k$, if we have $y_i=5$, $y_j=6$, and $y_k=20$, we expect features for sample $i$ should be more similar to sample $j$ compared to sample $k$ since their labels are closer together. To enforce this, **it is necessary to know the labels**.
In our work, we propose a novel method that **extracts order relationships from unlabeled samples as well** through spectral seriation for contrastive learning.
\
### Q2) Calculating similarity matrix $S$
$S$ is obtained by calculating cosine similarity between features within a batch.
We use $\tilde{z}$ to denote L2 normalized features. $\tilde{z}$ has dimensions $B \times F$ where $B$ is the batch size and $F$ is the feature length.
$S$ is equal to $\tilde{z} \tilde{z}^T$ and has dimensions $B \times B$.
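As a minimal NumPy sketch of this computation (illustrative shapes, not our exact code):

```python
import numpy as np

B, F = 4, 8  # batch size and feature length (illustrative values)
z = np.random.default_rng(0).standard_normal((B, F))

# L2-normalize each feature vector to obtain z_tilde.
z_tilde = z / np.linalg.norm(z, axis=1, keepdims=True)

# Cosine-similarity matrix S = z_tilde @ z_tilde^T, with shape B x B.
S = z_tilde @ z_tilde.T
```

By construction $S$ is symmetric with a unit diagonal, since each normalized feature is maximally similar to itself.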
\
### Q3) Explanation of loss function (Eq. 9)
Eq. 9: $\mathcal{L} = \mathcal{L}^{SR} + w_{SC} \mathcal{L}^{SC} + w_{UC} \mathcal{L}^{UC} + w_{UR} \mathcal{L}^{UR}$
$\mathcal{L}^{SR}$ is the supervised regression loss (Eq. 2).
$\mathcal{L}^{SC}$ is the supervised contrastive loss for regression. We use Ordinal Entropy [2] in our experiments (line 184). This ensures that the model learns features that are consistent with the distance relationships between sample labels. Pairs with labels closer together will have features that are more similar compared to pairs with labels further apart.
$\mathcal{L}^{UC}$ is the unsupervised contrastive loss for unlabeled samples (Eq. 7). Because labeled features are trained using contrastive learning, the similarity matrix for unlabeled features will also reflect order and distance relationships between them, but with noise. We can use spectral seriation to extract the ordinal ranking between unlabeled samples from this noisy similarity matrix.
By obtaining the ordinal ranking of unlabeled samples, we can infer the relative relationships of similarity values for feature pairs. Intuitively, if sample $i$ is ranked closer to sample $j$ than to sample $k$, then the cosine similarity between feature pair ${i,j}$ should be larger than that between ${i,k}$. These relationships are enforced through $\mathcal{L}^{UC}$, thus allowing us to perform contrastive learning using unlabeled samples.
$\mathcal{L}^{UR}$ is the unsupervised ranking loss, which we impose on predictions of unlabeled samples (Eq. 8). Because spectral seriation is robust to noise, the recovered ordinal rankings should also be useful for supervising prediction outputs. This also enforces consistency in the order of features and predictions. We enforce the same pairwise relationships inferred in $\mathcal{L}^{UC}$ on predictions using $\mathcal{L}^{UR}$.
\
### Q4) Ranking function $\ell$ in Eq. 7-9
We directly use the differentiable combinatorial solver proposed in [3] for $\ell$. At a high level, $\ell$ uses interpolation methods to allow discrete combinatorial inputs (e.g. rankings) to be differentiable. We do not provide details since it is better explained in the original work. However, we will include an intuitive explanation in our revision.
\
### Q5) Explanation of Section 2.3
One advantage of spectral seriation is that it is based on error minimization and is therefore robust to noise. To understand this intuitively, we can imagine a loss surface and its optimum point at the surface minimum. The optimum point will remain the same even if the surface is perturbed, as long as the distortions are within certain limits.
Theorems 2 and 3 illustrate this formally by deriving theoretical bounds for noise tolerance. We demonstrate robustness to two different kinds of noise: noise randomly distributed across the entire surface (Theorem 2), and noise from a single poorly learned feature (Theorem 3).
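To make the intuition concrete, the following toy sketch (our illustration, not the paper's implementation) recovers a latent ordering from a mildly perturbed similarity matrix by sorting the Fiedler vector of its graph Laplacian; the small noise does not change the recovered order:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8
pos = rng.permutation(n)  # latent position of each sample in the true order

# Similarity decays with distance in the latent order, plus small random noise.
S = np.exp(-np.abs(pos[:, None] - pos[None, :]) / 3.0)
S = S + 0.002 * rng.standard_normal((n, n))
S = (S + S.T) / 2  # keep the matrix symmetric

# Graph Laplacian L = D - S; the Fiedler vector is the eigenvector of the
# second-smallest eigenvalue (the smallest is 0, with a constant eigenvector).
L = np.diag(S.sum(axis=1)) - S
_, eigvecs = np.linalg.eigh(L)
fiedler = eigvecs[:, 1]

# Sorting by the Fiedler vector recovers the latent order (up to reversal).
recovered = np.argsort(fiedler)
```

This mirrors the loss-surface intuition above: the minimizer (the ordering) stays put under perturbations within certain limits.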
\
### L1) Loss component ablation
Component ablation results on the synthetic dataset are illustrated in Figure 3. Results for brain age estimation and age estimation from photos are given in supplementary materials Table S2 and S3.
\
### L2) Hyperparameter ablation
We perform hyperparameter ablation by adjusting each parameter individually while keeping the others at their optimum values:
**Synthetic dataset, 1/4 labels (Section 3.1)**
Optimum at $w_{SC}=w_{UC}=w_{UR}=0.001$; MAE 0.027
| |MAE|
|---|---|
|$w_{SC}=0.01$|0.033|
|$w_{SC}=0.0001$|0.030|
|$w_{UC}=0.01$|0.032|
|$w_{UC}=0.0001$|0.031|
|$w_{UR}=0.01$|0.052|
|$w_{UR}=0.0001$|0.030|
**Brain age estimation, 1/2 labels (Section 3.2)**
Optimum at $w_{SC}=1,w_{UC}=0.05,w_{UR}=0.01$; MAE 9.37
| |MAE|
|---|---|
|$w_{SC}=5$|12.73|
|$w_{SC}=0.2$|9.42|
|$w_{UC}=0.5$|10.24|
|$w_{UC}=0.005$|9.45|
|$w_{UR}=0.1$|9.49|
|$w_{UR}=0.001$|10.61|
**Age estimation, 1/25 labels (Section 3.3)**
Optimum at $w_{SC}=1,w_{UC}=0.05,w_{UR}=0.01$; MAE 9.59
| |MAE|
|---|---|
|$w_{SC}=5$|9.73|
|$w_{SC}=0.2$|9.91|
|$w_{UC}=0.5$|10.70|
|$w_{UC}=0.005$|9.64|
|$w_{UR}=0.1$|9.97|
|$w_{UR}=0.001$|9.65|
### L3) Clarity of Section 2.3
See Q5.
\
[1] W, Dai et al. "Adaptive contrast for image regression in computer-aided disease assessment." IEEE Transactions on Medical Imaging 41.5 (2021): 1255-1268.
[2] S. Zhang et al. "Improving Deep Regression with Ordinal Entropy." arXiv preprint arXiv:2301.08915 (2023).
[3] M.V. Pogančić, et al. "Differentiation of blackbox combinatorial solvers." ICLR. 2019.
---
Rebuttal Comment 1.1:
Title: Reply to author
Comment: Thank you for providing the answers, which clear up some of my confusion. So, I keep my score. | Summary: In this paper, the authors propose a sophisticated method for deep semi-supervised learning using a contrastive loss function. The main idea is to estimate the ordinal relations among the unlabeled data samples by using the similarity matrix of the unlabeled data samples and then use these relations to improve the contrastive learning. To this end, the authors utilize the spectral seriation algorithm of [2]. The proposed method is tested on several datasets and the authors report better accuracies compared to the state-of-the-art.
Strengths: The main strengths of the paper can be summarized as follows:
1) The paper is generally well-written, with the exception of the Related Work section.
2) The proposed method is sound and supported with theoretical arguments. Estimating the ordinal rankings of the unlabeled data via similarity matrix makes sense and it seems the idea is working. I really liked the idea.
3) The proposed method achieves the state-of-the-art accuracies on all tested datasets.
Weaknesses: The main limitations of the paper can be summarized as follows:
1) The Related Work section is quite weak and its location in the manuscript is wrong. Please move it to Section 1, after the Introduction. Also, some important related references are missing; I listed some of them below. In particular, the one using graphs and the Laplacian matrix is directly related to the proposed method, since the ordinal rankings in the paper are found by using the Laplacian matrix.
2) Using cosine similarities only is a big limitation since it constrains the whole feature space onto the boundary of a hypersphere. Please note that two samples very far from each other in the feature space may collapse to similar points on the unit hypersphere. In contrastive loss, Euclidean distances are preferred unless additional tricks are done in the regression head (e.g., ArcFace, CosFace, etc. normalize both features and classifier weights in their classification losses).
3) I wonder if the authors use the loss function given in (9) directly. I do not see any constraint to enforce the feature samples to lie on the boundary of the unit hypersphere. In that case, using cosine distances is not a good choice.
4) Please explain how the Fiedler vector is computed.
5) There are 3 parameters in the proposed loss function. Please describe how they are set.
References
[R1] Mohan Timilsina, Alejandro Figueroa, Mathieu d’Aquin, Haixuan Yang. Semi-supervised regression using diffusion on graphs, Applied Soft Computing Journal, 2021.
[R2] Jean et al., Semi-supervised Deep Kernel Learning: Regression with Unlabeled Data by Minimizing Predictive Variance, NeurIPS, 2018.
[R3] Xu et al., Semi-supervised regression with manifold: A Bayesian deep kernel learning approach, Neurocomputing, 2022.
[R4] Fazakis et al., A multi-scheme semi-supervised regression approach, Pattern Recognition Letters, 2019.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: 1) I wonder if the authors use the loss function given in (9) directly. I do not see any constraint to enforce the feature samples to lie on the boundary of the unit hypersphere. In that case, using cosine distances is not a good choice.
2) Please explain how the Fiedler vector is computed.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: The authors do not address any limitation, but the main limitation is using cosine distances in the proposed method.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your detailed comments! We are glad you liked our idea! We address remaining concerns below:
\
### W1) Improvements to Related Work section
Thank you for your helpful feedback. These will be considered in our revision.
\
### W2) Use of cosine similarity
It is true that using cosine similarity limits features to a unit hypersphere after normalization. However, existing state-of-the-art contrastive learning methods for classification (e.g. SimCLR [1], MOCO [2], SupCon [3], etc.) and regression (AdaCon [4], Ordinal Entropy [5]), all make use of L2 normalized features and cosine similarity for contrastive learning.
Because the focus of this work is on **how to utilize additional signals from unlabeled samples** for semi-supervised contrastive regression, we build on top of these existing contrastive regression approaches. We do not explore improved contrastive learning loss functions that do not make use of cosine similarity in this work.
Specifically, our method (CLSS) utilizes Ordinal Entropy [5] for contrastive loss, which is implemented after applying L2 normalization on the feature layer (see lines 73, 75, 80, and 184). Regression predictions are obtained using a fully-connected regression layer directly after the features without L2 normalization.
\
### W3) Loss function in Eq. 9
The loss function in Eq. 9 is applied directly. To clarify, we do not **explicitly enforce** the features to lie on a unit hypersphere. Instead, we apply contrastive loss to the feature layer **after first applying L2 normalization**. This is also the standard approach used in existing state-of-the-art methods for contrastive learning [1-5]. In our method, we use Ordinal Entropy [5] for contrastive learning, which applies L2 normalization on the feature layer before applying the loss function (see lines 73, 75, 80, and 184). The fully-connected regression layer is directly applied to features without normalization to obtain the regression prediction. We will state this more clearly in our revision.
\
### W4) Computation of Fiedler vector
The Fiedler vector is defined as the eigenvector corresponding to the smallest non-zero eigenvalue of the graph Laplacian matrix [6]. It can be computed through standard eigenvalue decomposition. We will state this more clearly in our revision.
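As a brief illustrative sketch (with a hypothetical 3-sample similarity matrix, not values from our experiments):

```python
import numpy as np

# Hypothetical similarity matrix of three samples.
S = np.array([[1.0, 0.8, 0.3],
              [0.8, 1.0, 0.8],
              [0.3, 0.8, 1.0]])

# Graph Laplacian L = D - S.
L = np.diag(S.sum(axis=1)) - S

# np.linalg.eigh returns eigenvalues in ascending order; the first is 0
# (constant eigenvector), so the Fiedler vector is the second eigenvector.
eigvals, eigvecs = np.linalg.eigh(L)
fiedler = eigvecs[:, 1]
```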
\
### W5) Parameters for loss function
The three parameters for the loss function in Eq. 9 ($w_{SC}, w_{UC}, w_{UR}$) are determined by performing a coarse search on the validation set. We include hyperparameter ablation results below for reference. Parameters are individually adjusted whilst keeping the remaining ones at optimum value:
**Synthetic dataset, 1/4 labels (Section 3.1)**
Optimum at $w_{SC}=w_{UC}=w_{UR}=0.001$; MAE 0.027
| |MAE|
|---|---|
|$w_{SC}=0.01$|0.033|
|$w_{SC}=0.0001$|0.030|
|$w_{UC}=0.01$|0.032|
|$w_{UC}=0.0001$|0.031|
|$w_{UR}=0.01$|0.052|
|$w_{UR}=0.0001$|0.030|
**Brain age estimation, 1/2 labels (Section 3.2)**
Optimum at $w_{SC}=1,w_{UC}=0.05,w_{UR}=0.01$; MAE 9.37
| |MAE|
|---|---|
|$w_{SC}=5$|12.73|
|$w_{SC}=0.2$|9.42|
|$w_{UC}=0.5$|10.24|
|$w_{UC}=0.005$|9.45|
|$w_{UR}=0.1$|9.49|
|$w_{UR}=0.001$|10.61|
**Age estimation, 1/25 labels (Section 3.3)**
Optimum at $w_{SC}=1,w_{UC}=0.05,w_{UR}=0.01$; MAE 9.59
| |MAE|
|---|---|
|$w_{SC}=5$|9.73|
|$w_{SC}=0.2$|9.91|
|$w_{UC}=0.5$|10.70|
|$w_{UC}=0.005$|9.64|
|$w_{UR}=0.1$|9.97|
|$w_{UR}=0.001$|9.65|
\
\
### Q1) Loss function in Eq. 9
See response to W3.
\
### Q2) Computation of Fiedler vector
See response to W4.
\
[1] Chen, Ting, et al. "A simple framework for contrastive learning of visual representations." International conference on machine learning. PMLR, 2020.
[2] He, Kaiming, et al. "Momentum contrast for unsupervised visual representation learning." Proceedings of the IEEE/CVF conference on computer vision and pattern recognition. 2020.
[3] Khosla, Prannay, et al. "Supervised contrastive learning." Advances in neural information processing systems 33 (2020): 18661-18673.
[4] Dai, Weihang, et al. "Adaptive contrast for image regression in computer-aided disease assessment." IEEE Transactions on Medical Imaging 41.5 (2021): 1255-1268.
[5] Zhang, Shihao, et al. "Improving Deep Regression with Ordinal Entropy." arXiv preprint arXiv:2301.08915 (2023).
[6] Atkins, Jonathan E., Erik G. Boman, and Bruce Hendrickson. "A spectral algorithm for seriation and the consecutive ones problem." SIAM Journal on Computing 28.1 (1998): 297-310.
---
Rebuttal Comment 1.1:
Title: reply to the author
Comment: The authors should seriously consider moving the related work subsection to the Introduction. Using the L2 norm before the loss already enforces the samples to lie on the boundary of the hypersphere, and this is a necessary step if one uses cosine distances; this addresses the issue I pointed out. In general, I liked the paper and I keep my initial rating, accept. | Summary: The paper presents a novel approach towards extending contrastive learning methods for deep regression in a semi-supervised setting. The authors address the challenge of using unlabeled data for contrastive learning in deep regression tasks by applying spectral seriation algorithms to infer the ordinal relationship between unlabeled samples. Empirically, they show that the proposed method improves performance on medical datasets.
Strengths: - The motivation for the paper is clear, and the idea is novel from what I can tell.
- The authors provide a comprehensive robustness analysis of their method, with theoretical proofs showing its resilience to different types of noise.
- The paper is well-organized and logically structured, with clear explanations.
Weaknesses: I understand that the motivation for this paper is mainly for medical datasets, but it would be great to test whether the proposed method also works beyond medical setting, as it does seem quite general.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: - What is the computational complexity of the proposed method? How does the training and inference time compare to other state-of-the-art methods?
- How well would the proposed method perform on more diverse and complex datasets, especially beyond the medical setting?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your positive comments! We are glad you found our idea novel and well supported theoretically. We address remaining questions below:
\
### W1) Application beyond medical settings
See response to Q2
\
### Q1) Computational complexity
Our proposed method, CLSS, does not introduce significant computational complexity compared to state-of-the-art methods. CLSS requires calculating eigenvalues and eigenvectors for Laplacian matrix $L$ (Eq. 6), but this can be done efficiently with existing computational tools and algorithms. Test-time inference is also more efficient than state-of-the-art semi-supervised methods because we only require predictions from **one** model, instead of taking the average from **two** co-trained models.
We show some reference times below for the synthetic dataset used in Section 3.1. We report the time taken in seconds for one iteration of training and inference for different methods. We use the same batch sizes for all methods as stated in the manuscript for fair comparison.
| |Training (seconds per iteration)|Testing (seconds per iteration)|
| ----------- | ----------- | ----------- |
| Regression | 0.2015 | 0.0013 |
| Regression + Contrastive Loss | 0.2167 | 0.0012 |
| Mean-teacher | 0.2145 | 0.0012 |
| CPS | 0.2022 | 0.0018 |
| UCVME| 0.2487 | 0.0043 |
| CLSS (Ours)| 0.2310| 0.0013|
Our method CLSS has competitive inference times for testing (0.0013). Training time is also faster than UCVME (0.2310 vs 0.2487), the best performing alternative, since UCVME performs variational inference during training.
CLSS also has smaller model size compared to state-of-the-art semi-supervised approaches. CLSS only uses **one** model, whilst alternative methods rely on **two** co-trained models, thereby doubling the memory required.
| |Number of parameters|
| ----------- | ----------- |
| Regression | 34,401 |
| Regression + Contrastive Loss | 34,401 |
| Mean-teacher | 68,802 |
| CPS | 68,802 |
| UCVME| 69,004 |
| CLSS (Ours)| 34,401|
Overall, our method is computationally efficient and memory efficient. We will include these details in our revision.
\
### Q2) Performance on general datasets
To clarify, we performed experiments on **three different types of datasets** in our manuscript:
1. a synthetic dataset for solving partial differential equations (PDEs) through operator learning (Section 3.1)
2. a medical dataset for brain age estimation from MRI scans (Section 3.2)
3. a natural image dataset for age estimation from photographs (Section 3.3)
This is also stated at the start of Section 3.
In the manuscript, we emphasize the benefits of our method for medical imaging analysis because semi-supervised deep regression is particularly valuable for such realistic problem settings. However, our experiments demonstrate that CLSS **can similarly be applied to different problem types**, such as solving PDEs and regression benchmark tasks like age estimation.
We will make sure that this is stated more explicitly in our revision to avoid confusion. | Rebuttal 1:
Rebuttal: We thank the reviewers for taking the time to provide their thoughtful comments and feedback. We are glad that in general, the reviewers found our method novel and well supported theoretically. We individually address remaining questions and concerns below. These will also be included in our final revision to help strengthen the manuscript. | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Frequency-domain MLPs are More Effective Learners in Time Series Forecasting | Accept (poster) | Summary: This paper presents FreTS, a framework that addresses both channel-wise and time-wise dependency learning in the frequency domain for time series prediction. FreTS introduces a specially designed frequency-domain MLP structure that processes the real and imaginary parts of the frequency components interactively. The experimental results on 13 benchmarks demonstrate the superior performance of FreTS compared to other state-of-the-art methods.
Strengths: - The two-stage framework and the frequency-domain MLP module introduced in FreTS exhibit some novelty.
- The experimental results across 13 benchmarks show the potential of the proposed method.
Weaknesses: - The authors failed to motivate their method both theoretically and in relation to prior work. 1) The authors did not clearly define and explain the terms "point-wise mappings" and "information bottleneck", did not elaborate on how existing methods suffer from these issues, and did not clarify how the proposed method addresses and alleviates these problems. 2) While the authors claim that their method has a "global view", they did not highlight how it differs from previous MLP-based and Transformer-based methods in capturing long-range dependencies. 3) The claim of the advantage of “energy compaction” is not supported by experimental evidence.
- The related work section lacks logical organization and fails to provide a comprehensive summary of previous methods.
- The method is not well presented. 1) For the Frequency Temporal Learner, the validity behind performing DFT along the channel dimension needs further discussion. It is more likely to be based on engineering considerations than a theoretical grounding. 2) The rationale behind the design of the proposed frequency-domain MLP is unclear regarding why it calculates the new real and imaginary components as Eq. (7).
- Additional experiments are needed to highlight the benefits of the proposed approach, e.g. the learned global periodic patterns, and the robustness towards noise. Including these experiments will strengthen the empirical evidence for the effectiveness of the proposed method.
- Certain phrases or sentences lack clarity and need improvement, e.g., “stacked MLP layers”, “learning in the frequency spectrum helps capture a greater number of periodic patterns”, etc.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: - Are the “Domain Conversion” and “Domain Inversion” in Fig. 2 differentiable? How is the network trained end-to-end?
- What are the meanings of the axes in Fig. 1? Whether the results are from a time series or a dataset?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 2 fair
Limitations: The authors are encouraged to discuss the computational complexity after introducing frequency transform and inverse transform into the framework, and its performance on irregular time series.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate your review. We hope our responses below can address the misunderstandings and concerns.
W1.
1. The concepts of "point-wise" and "information bottleneck" are widely recognized in the literature. In fact, we provided an explanation of these terms within the context of time series forecasting in **lines 39-42**. The point-wise nature of MLPs limits MLP-based methods' ability to capture long-range global dependencies. Further, these methods suffer from an information bottleneck due to the volatile and redundant local momenta of time series, making it hard for them to capture accurate time-wise dependencies. This work aims to address these issues by leveraging the advantages of frequency-domain MLPs, i.e., a global view and energy compaction (see **Lines 43-56**).
2. FreTS is based on frequency-domain MLPs, which clearly differ in architecture from MLP- and Transformer-based methods for capturing long-range dependencies. Intrinsically, Transformer-based methods rely on a pair-wise attention mechanism and MLP-based methods rely on point-wise mappings, while ours relies on the global view of frequency techniques and frequency-domain multiplication.
3. On the contrary, "energy compaction" was **discovered** through our experiments. Under the experimental settings described in **Appendix B.4**, we observe that learning in the frequency domain identifies more concentrated diagonal dependencies and patterns than learning in the time domain (see **Figures 1, 5, 7, and 8**).
W2.
+ FreTS is a frequency-based MLP model for time series forecasting. To provide a comprehensive summary, it is essential to introduce common time-domain models, frequency-based models, and MLP-based models.
+ Correspondingly, the related work is organized from three perspectives: the first paragraph covers time-domain models, from classic models to SOTA deep learning models; the second paragraph discusses how existing frequency-based models integrate frequency techniques with neural networks; and the third paragraph describes representative and SOTA MLP-based models.
This organization logically groups the models relevant to our work and provides a comprehensive discussion of the related literature.
W3.
1. Note that the Frequency Temporal Learner performs the DFT on the time dimension (**see line 165**) while the Frequency Channel Learner operates on the channel dimension (**see line 152**). We believe you are asking about the theoretical grounding of the Frequency Channel Learner; please refer to the general response about the frequency channel learner.
2. In the frequency domain, the values representing the frequency spectrum are complex numbers. Mathematically, the multiplications in the frequency domain adhere to **the basic rule of complex multiplication**. Please refer to **Appendix C** where we show a calculation example.
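For illustration (our sketch, not the paper's exact code), an Eq.-(7)-style update is complex multiplication written in real arithmetic: for input $a + jb$ and weights $W_r + jW_i$, the output is $(aW_r - bW_i) + j(aW_i + bW_r)$:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 4
a, b = rng.standard_normal((2, d))       # real and imaginary parts of the input
Wr, Wi = rng.standard_normal((2, d, d))  # real and imaginary weight matrices

# Frequency-domain MLP update written with real arithmetic, following the
# basic rule of complex multiplication.
real_out = a @ Wr - b @ Wi
imag_out = a @ Wi + b @ Wr

# It matches native complex matrix multiplication.
Z = (a + 1j * b) @ (Wr + 1j * Wi)
```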
W4.
1. Note that we have already conducted extensive experiments to verify our claims regarding the benefits of the proposed approach.
+ To verify the global view of learning in the frequency domain, we performed visualization experiments on the Traffic and Electricity datasets, comparing the weights learned in the time domain with those learned on the frequency spectrum.
The results are shown in **Figures 9, 10, 11, and 12 in Appendix G.2**, verifying that FreTS has a strong capability to capture global periodic patterns.
+ To verify energy compaction in the frequency domain, we visualized the weights of the frequency temporal learner on Traffic and Electricity. The results are reported in **Figures 5, 7, and 8**, exhibiting energy aggregation characteristics with clear diagonal patterns and dependencies.
2. To address your concern regarding the robustness of FreTS towards noise, we add $0.1 \times \mathcal{N}(0,1)$ Gaussian random noise into training data on the ETTh1 dataset. The results shown in the following table support our claim of robustness.
| Horizon | without noise | | with noise | |
|:--|:--|:--|:--|:--|
| | MAE | RMSE | MAE | RMSE |
|96|0.061|0.087|0.061|0.087|
|192|0.065|0.091|0.065|0.091|
|336|0.070|0.096|0.070|0.097|
|720|0.082|0.108|0.083|0.109|
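The perturbation used in the robustness check above can be sketched as follows. This is a hypothetical illustration of the stated setup (array shapes are invented), not the authors' training script:

```python
import numpy as np

rng = np.random.default_rng(42)
X_train = rng.standard_normal((1000, 7))   # hypothetical multivariate training slice

# perturb the training data with 0.1 * N(0, 1) Gaussian noise
X_noisy = X_train + 0.1 * rng.standard_normal(X_train.shape)
```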
W5.
In **Line 94**: "stacked MLP layers" refers to N-BEATS, which proposes a deep architecture based on backward and forward residual links and a very deep stack of fully-connected layers (**cf. lines 2-3 in the Abstract of the N-BEATS paper**).
In **Line 129-130**: we state "learning in the frequency spectrum helps capture a greater number of periodic patterns" to explain the observation from Figure 1(a), meaning the patterns learned in the frequency domain exhibit more obvious global periodic patterns than in the time domain. We have explained this in **the caption of Figure 1** and in **lines 50-51**. We will carefully polish the phrases.
Q1.
Certainly, both of them are differentiable; please refer to **Equations (1) and (2)**. Our FreTS is an end-to-end forecasting model.
Q2.
Fig. 1 visualizes the learned weights $W \in \mathbb{R}^{d\times d}$ where $d$ is the hidden dimension. The axes are the dimension indices of the weight matrix.
The weights for visualizing Fig. 1 are learned on the Traffic dataset (see **lines 575 and 584 in Appendix**). A detailed description of the visualization settings is **in Appendix B.4**.
Limitation:
1. In **Section 4.3**, we have conducted experiments to evaluate the efficiency of FreTS, covering short-term forecasting scenarios and long-term forecasting scenarios, in comparison with SOTA models. Contrary to your concern, incorporating Fourier transform can improve efficiency (see reasons in our responses to Reviewer WVuT). Also, this advantage has been widely explored and acknowledged in the literature for time series forecasting, such as Autoformer and FEDformer.
2. In this paper, similar to all baseline methods, we only focus on time series forecasting without focusing on irregular time series. Frequency transformation may also help address irregular time series issues, and we leave this as our future work.
We'll clarify the above in the final version.
---
Rebuttal Comment 1.1:
Comment: I appreciate the authors' efforts to clarify the concerns with more detailed explanations. My questions are mostly addressed, and I have raised my score.
---
Reply to Comment 1.1.1:
Title: Thanks for feedback
Comment: Dear Reviewer bjea, we appreciate your feedback. Many thanks.
Authors | Summary: This paper investigates time series forecasting in the frequency domain. By utilizing MLPs in the frequency domain, the proposed FreTS effectively captures the patterns of time series with a global view and energy compaction. Frequency learning is applied to both inter-series and intra-series scales, allowing FreTS to capture channel-wise and time-wise dependencies. Extensive empirical experiments demonstrate the effectiveness of FreTS in both short-term and long-term forecasting tasks.
Strengths: 1) This paper redesigns MLPs in the frequency domain to effectively capture both time-wise and channel-wise correlations. The use of simple MLPs ensures high efficiency and helps mitigate overfitting issues compared to existing sophisticated deep models.
2) This paper is well-organized, with comprehensive discussion and experiments.
Weaknesses: 1) The authors mention that learning time series in the frequency domain has the nature of energy compaction, as the energy concentrates on the smaller part of frequency components. However, the proposed FreTS still retains all the frequency components when performing MLP in the frequency domain, thereby not fully leveraging this advantageous characteristic.
2) Frequency domain modeling is advantageous for capturing temporal dependencies due to the inherent periodic patterns in time series. However, the suitability of frequency domain modeling for exploring channel-wise correlations requires further discussion and verification. Moreover, conducting channel-wise modeling for each time point is inefficient, and should be performed at a coarser granularity, such as segment-wise or series-wise.
3) The dimension extension block takes an input $X_t \in \mathbb{R}^{N \times L}$ and produces an output $H_t \in \mathbb{R}^{N \times L \times d}$. However, if the number of variables $N$ or the input length $L$ is large, the memory usage of $H_t$ can become very high.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1) Short-term forecasting comparison shown in Table 1 is suggested to include N-BEATS[1] and N-HITS[2] as baselines as well.
2) Figure 4(b) demonstrates that the parameter number of FreTS remains constant as the prediction length increases. However, the projection block described in subsection 3.1 includes learnable parameters $\phi_2 \in \mathbb{R}^{d_h \times \tau}$ and $b_2 \in \mathbb{R}^\tau$, and the number of these parameters will increase with the prediction length $\tau$.
3) From which learner (frequency channel or frequency temporal), and at which layer are the weights used for obtaining the visualizations?
Reference
[1] Oreshkin B N, Carpov D, Chapados N, et al. N-BEATS: Neural basis expansion analysis for interpretable time series forecasting[C]//International Conference on Learning Representations. 2019.
[2] Challu C, Olivares K G, Oreshkin B N, et al. NHITS: Neural Hierarchical Interpolation for Time Series Forecasting[C]//Proceedings of the AAAI Conference on Artificial Intelligence. 2023, 37(6): 6989-6997.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate your review and the positive comments regarding our paper. Below, we address your comments.
**W1. ...still retains all the frequency components..., thereby not fully leveraging this advantageous characteristic.**
We preserve the entire frequency components and feed them into the frequency channel/temporal learners to avoid any information loss, which is necessary and rational. Note that all components, such as high-frequency ones or low-frequency ones, are adaptively highlighted or downweighed according to different data characteristics.
Accordingly, *energy compaction* is achieved by adaptively learning in the frequency domain instead of manually discarding some components; it means that the information is concentrated on a smaller portion of components than in the time domain. This may be attributed to the fact that the frequency spectrum has discriminative frequency components specified for different data characteristics.
**W2**
1. Frequency domain modeling is theoretically and empirically suitable for exploring channel-wise correlations.
+ Theoretically, the frequency channel learner is equivalent to applying global convolutions on the variables for each timestamp (see **Appendix D.2 Proof of Theorem 2**).
+ Empirically, we have supplemented visualization experiments to verify the channel-wise learning capability of our frequency channel learner. For more details, **please refer to the general response about Frequency channel learner**.
2. In many real-world scenarios, such as traffic scenarios, channel-wise dependencies can be time-evolving, even varying at different time points. In such cases, it becomes necessary and beneficial to conduct channel-wise modeling for each time point individually to capture the time-varying channel-wise dependencies. This approach is commonly employed in the literature, like in AGCRN and StemGNN.
Modeling channel-wise dependencies at a segment-wise or series-wise granularity is a promising idea, reminiscent of Autoformer's successful series-wise modeling of time-wise dependencies. However, such an approach overlooks the temporal dynamics within each segment and would need to integrate time information for comprehensive modeling. Due to time constraints, we were unable to conduct experiments to validate this idea. Nevertheless, we appreciate your constructive suggestion and will take it into account in our future research.
**W3. ...if the number of variables or the input length is large, the memory usage can become very high.**
First, performing time series embedding is widely adopted in recent MTS forecasting models, e.g., StemGNN and AGCRN. Second, we deliberately choose a smaller hidden dimension, specifically $d=128$, in comparison to Transform-based models that typically employ higher dimensions, such as 1024. As a result, the memory usage of $\mathbf{H}_t$ is perfectly acceptable, which is evidenced by the fact that our FreTS capably works on long-term forecasting on all datasets and achieves the SOTA performance.
In addition, to avoid **extensive parameters** in the dimension extension block, we employ $\mathbf{H}_t=\mathbf{X}_t \times \phi_d$ where $\phi_d \in \mathbb{R}^{1 \times d}$ (see **lines 124-125**). Note that the parameter volume of the embedding matrix of $\phi_d$ is independent of $N$ and $L$. This approach effectively avoids extensive parameters, as demonstrated in Figure 4(a), where the parameter count of FreTS remains unaffected by changes in $N$.
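As a sketch of the dimension extension described above (sizes are illustrative except $d=128$, which the text states), the operation is a broadcast product whose parameter count is just $d$, independent of both $N$ and $L$:

```python
import numpy as np

N, L, d = 21, 96, 128                  # illustrative N and L; d = 128 as stated above
X_t = np.random.randn(N, L)
phi_d = np.random.randn(1, d)          # the embedding's only parameters: d values

# H_t = X_t x phi_d via broadcasting: (N, L, 1) * (1, 1, d) -> (N, L, d)
H_t = X_t[:, :, None] * phi_d[None, :, :]
```

Only `phi_d`'s 128 entries are learnable; the memory for `H_t` scales with `N*L*d` but the parameter volume does not.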
**Q1. Short-term ... comparison ... suggested to include N-BEATS[1] and N-HITS[2] as baselines...**
Thank you for pointing out the two models. In Related Work, we have discussed these models, but we did not compare them with FreTS, considering they are univariate forecasting models. Taking your suggestion, we conducted a comparison between FreTS and the two baselines under the input length and prediction length of 12. The corresponding results are presented below:
| | Solar| |Wiki | |Traffic| |ECG| |Electricity| |COVID-19| |METR-LA| |
|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|
| | MAE| RMSE|MAE | RMSE|MAE| RMSE|MAE| RMSE|MAE| RMSE|MAE|RMSE |MAE| RMSE|
|N-BEATS|0.137|0.182|0.047|0.087|0.016|0.031|0.056|0.088|0.057|0.085|0.162|0.192|0.087|0.166|
|N-HITS|0.129|0.174|0.046|0.083|0.018|0.035|0.055|0.086|0.060|0.089|0.155|0.183|0.090|0.173|
|FreTS|0.120|0.162|0.041|0.074|0.011|0.023|0.053|0.078|0.050|0.076|0.123|0.167|0.080|0.166|
The results show that FreTS surpasses N-BEATS and N-HITS across all datasets, thereby demonstrating that frequency-domain MLP is more effective than time-domain MLP. We will include these results in the final version.
**Q2. ...the number of these parameters will increase with the prediction length**
Thank you for reading our paper carefully. Indeed, the number of parameters in FreTS increases with the prediction length $\tau$. The raw number of parameters in FreTS for drawing Figure 4(b) is 3.23, 3.26, 3.30, 3.34. However, due to the minimal variation among these data points, the curve exhibits only slight changes.
We will redraw the figure and put values on the figure to show the change more clearly.
**Q3. From which learner ... which layer are the weights used for obtaining the visualizations?**
Note that both the frequency channel learner and the frequency temporal learner adopt one layer of FreMLP (see **line 556 in Appendix B.3**); the reason for using one layer is explained in **our response to question W1 by Reviewer aNLj**. To ensure a fair comparison with DLinear, we follow the visualization settings used in DLinear and utilize the weights of the frequency temporal learner for the visualizations, the same as DLinear does. Further details about the visualization settings are in **Appendix B.4**, and we have uploaded the source code of the visualization method in the supplementary materials for reference.
We'll clarify the above in the final version.
---
Rebuttal Comment 1.1:
Comment: Thanks for the detailed responses to the initial reviewing comments. Generally, the authors have addressed most of my concerns.
Nevertheless, in the response to W1, it is mentioned that the model can adaptively highlight or downweigh different frequency components. Maybe further clarification is still expected on this point. Generally, MLPs are shared across different frequency components, but FreMLP only consists of one single layer.
---
Reply to Comment 1.1.1:
Title: Thanks for your feedback
Comment: Dear Reviewer u93d,
We appreciate your feedback and would like to clarify the point you mentioned.
- Note that in FreMLPs, the weights $\mathcal{W}$ are complex numbers that consist of two parts of weights, i.e., $\mathcal{W}=\mathcal{W}_r+j\mathcal{W}_i$, where $\mathcal{W}_r \in \mathbb{R}^{d\times d}$ is the real part and $\mathcal{W}_i \in \mathbb{R}^{d\times d}$ is the imaginary part (see **Definition 1 and Equation (7)**). Correspondingly, the multiplication in the frequency domain is implemented by the separate computation on the real and imaginary parts of frequency components, adhering to the **basic rule of complex multiplication** (see **Equation (7) and Appendix C**).
- According to the convolution theorem [1], the Fourier transform of a convolution between two sequences is equal to the pointwise multiplication of their respective Fourier transforms. This theorem enables us to efficiently conduct convolutions in the frequency domain. In mathematics, the calculation in FreMLP in the frequency domain, i.e., $\mathcal{H}\mathcal{W}$ where the input $\mathcal{H}$ and the weights $\mathcal{W}$ are complex-valued, involves applying a filter of $\mathcal{W}$ over $\mathcal{H}$, which is equivalent to performing a convolution in the time domain (see **Theorem 2**).
- Based on the above, the calculations in FreMLPs exhibit distinctions when compared to general MLPs in the time domain. General MLPs can be regarded as performing transformations using the MLP weights to highlight or downweigh certain features (elements), **while FreMLPs can be regarded as applying filtering in the frequency domain over the frequency spectrum** to highlight or downweigh certain frequency components.
- A tiny illustration example, for a real-valued vector $\mathbf{V}$, multiplying a real number $r$ with $\mathbf{V}$, e.g., $2\mathbf{V}$ with $r=2$, means equally highlighting/downweighing all elements in the vector by $r$; while for a **complex-valued** vector $\mathcal{V}$, multiplying a **complex number** $c$ with $\mathcal{V}$, e.g., $(2+2j)\mathcal{V}$ with $c=(2+2j)$, does not mean equally highlighting/downweighing all elements in the vector. Instead, the multiplication can be **regarded as performing the filter of $c$ onto the frequency spectrum $\mathcal{V}$**.
In summary, although we adopt shared complex-number weight matrices in each layer of FreMLP, FreMLP can adaptively learn to highlight some frequency components and downweigh others during training.
[1]. S. S. Soliman and MD Srinath. Continuous and discrete signals and systems. Prentice Hall, (1990)
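The convolution theorem invoked above can be checked numerically. This sketch is our own illustration (not the paper's Theorem 2 proof): it verifies that pointwise multiplication of spectra matches circular convolution in the time domain:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 16
h = rng.standard_normal(n)             # a time-domain sequence
w = rng.standard_normal(n)             # a time-domain filter

# frequency-domain path: pointwise multiply the spectra, then invert
freq = np.fft.ifft(np.fft.fft(h) * np.fft.fft(w)).real

# time-domain path: direct circular convolution
circ = np.array([sum(h[j] * w[(i - j) % n] for j in range(n)) for i in range(n)])

assert np.allclose(freq, circ)
```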
- **Reasons for the one-layer setting in FreMLP**: a one-layer FreMLP is powerful enough and empirically performs well in learning channel-wise or time-wise dependencies, while stacking multiple FreMLPs brings more parameters and may lead to lower learning efficiency.
For more experimental details, please refer to our responses to **W2** and **Q2** of Reviewer aNLj.
Hope we have addressed your concerns. If you have any further questions or concerns, please feel free to let us know.
Authors | Summary: In this paper, the authors investigate the problem of time-series forecasting. Since the frequency domain can preserve the information from a global view and enjoy the advantage of energy compaction, the authors propose the FreTS model, which is composed of the Frequency Channel Learner and the Frequency Temporal Learner. The authors further prove the reasonableness of the frequency-domain MLPs and evaluate the proposed idea on several datasets.
Strengths: 1. The authors investigate the time-series forecasting problem from the frequency domain and provide convincing reasons for the advantage of the frequency domain.
2. The authors devise the frequency channel learner and frequency temporal learner, which address different challenges in a unified framework.
3. The authors investigate the proposed method on several datasets and achieve the ideal performance.
Weaknesses: 1. To capture the dependencies among channels, the authors propose the frequency channel learner, which applies the FreMLP on the variables from each timestamp, e.g. $H_t^:$. However, since it is strange to consider $H_t^:$ as a time series, I think it might not be a good idea to employ FreMLP to capture these dependencies. In my opinion, CNNs, the attention mechanism, and GNNs might be good choices. It is suggested that the authors try different methods to better capture these dependencies.
2. The authors address the frequency-domain MLP from time-series forecasting, but some time-series data might be not seasonal and contain some monotonic tendency, can the proposed method address this problem?
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: Please refer weaknesses
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 3 good
Limitations: Please refer weaknesses
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate your review and the positive comments on our work. We address each of them as follows.
**W1. To capture the dependencies among channels, we propose the frequency channel learner, which applies the FreMLP on the variables from each timestamp, e.g., $\mathbf{H}_t^{:}$. However, since it is strange to consider $\mathbf{H}_t^{:}$ as a time series, I think it might not be a good idea to employ FreMLP to capture these dependencies. In my opinion, CNNs, the attention mechanism, and GNNs might be good choices. It is suggested that the author try different methods to better capture these dependencies.**
A1. Theoretically, the frequency channel learner is equivalent to applying global convolutions on the variables for each timestamp. Empirically, we have performed visualization experiments to verify the channel-wise learning capability of our frequency channel learner.
**Please refer to general response about Frequency channel learner**.
**W2. The authors address the frequency-domain MLP from time-series forecasting, but some time-series data might be not seasonal and contain some monotonic tendency, can the proposed method address this problem?**
A2. Yes, FreTS can learn a monotonic tendency; we have analyzed the reasons and conducted experiments to verify this. **Please refer to the general response about FreTS on non-periodic data**.
We'll clarify the above in the final version and hope that we have addressed all your concerns. Thanks.
---
Rebuttal 2:
Title: Gentle Reminder by AC
Comment: Dear Reviewer,
Could you carefully read the authors' rebuttal as well as the others' reviews and their rebuttal, and make responses at your earliest convenience? The deadline for the discussion phase is fast approaching, which is due Aug 21st 1pm EDT, so your quick responses will be greatly appreciated.
Best,
AC | Summary: The authors argue that MLP-based forecasting methods suffer from point-wise mappings and information bottlenecks and explore an interesting direction of applying MLPs in the frequency domain for time series forecasting. They further analyze the inherent characteristics of frequency-domain MLPs and propose the FreTS model via stacking frequency-domain MLPs for time series forecasting. The paper provides theoretical guarantees and extensive empirical evaluation to analyze the advantages of frequency-domain MLPs and show the superiority of FreTS, verifying the authors’ arguments and the effectiveness of frequency-domain MLPs.
Strengths: 1. The authors study an interesting neural network of frequency-domain MLPs and analyze their effectiveness advantages compared to MLPs in the time domain. This potentially inspires the MTS forecasting community to pay more attention to frequency analysis.
2. Frequency-domain MLPs are theoretically proved to be energy compacting and equivalent to global convolutions. FreTS based on frequency-domain MLPs is straightforward in architecture and theoretically sound.
3. The experimental results are comprehensive and impressive. Extensive results on 13 datasets validate the superiority of FreTS and the effectiveness of frequency-domain MLPs for both short-term forecasting and long-term forecasting. Model analysis and efficiency analysis are also provided to investigate the advantages of FreMLP over MLP and the higher efficiency of FreTS over SOTA baselines.
4. The visualization analysis provides quite interesting patterns and clearly shows the distinct characteristics between the time domain (MLPs) and the frequency domain (frequency-domain MLPs). Especially, the visualizations of the learned weights in the frequency domain show highly concentrated values in the diagonal, verifying the energy compacting and learning efficiency of frequency-domain MLPs.
5. The paper is well-organized and easy to follow, and the main contributions are quite clear and solid. In my opinion, it studies an interesting topic and provides a new perspective on incorporating frequency analysis into time series analysis.
Weaknesses: 1. The experimental results are convincing, but only one MLP-based baseline is compared in the experiments. Additional MLP-based baselines help to verify the advantages of FreMLPs over MLPs.
2. The superiority performance of FreTS verifies the effectiveness of (one layer) frequency-domain MLPs, but the authors did not investigate how well multi-layers of frequency-domain MLPs perform in MTS forecasting.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. The learned patterns on the frequency spectrum show that FreTS can capture more obvious periodic patterns. But when there are no periodic patterns in MTS data, can FreTS still perform well? That is, whether FreTS is suitable for non-periodic MTS data.
2. Why FreTS contains only one layer of FreMLP in either frequency channel learner and frequency temporal learner? How well does FreTS work when stacking with multiple FreMLPs?
===
I have read the rebuttal and would like to keep my rating.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Many thanks for your constructive comments and suggestions. We provide a point-by-point response to your comments below.
**W1. The experimental results are convincing, but only one MLP-based baseline is compared in the experiments. Additional MLP-based baselines help to verify the advantages of FreMLPs over MLPs.**
A1. Thank you for your suggestion. We have incorporated additional experiments to compare FreTS with two state-of-the-art MLP-based baselines, namely N-BEATS [1] and N-HITS [2], in the context of short-term forecasting. The results of these comparisons are provided below.
| | Solar| |Wiki | |Traffic| |ECG| |Electricity| |COVID-19| |METR-LA| |
|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|
| | MAE| RMSE|MAE | RMSE|MAE| RMSE|MAE| RMSE|MAE| RMSE|MAE|RMSE |MAE| RMSE|
|N-BEATS|0.137|0.182|0.047|0.087|0.016|0.031|0.056|0.088|0.057|0.085|0.162|0.192|0.087|0.166|
|N-HITS|0.129|0.174|0.046|0.083|0.018|0.035|0.055|0.086|0.060|0.089|0.155|0.183|0.090|0.173|
|FreTS|0.120|0.162|0.041|0.074|0.011|0.023|0.053|0.078|0.050|0.076|0.123|0.167|0.080|0.166|
The results show that FreTS surpasses N-BEATS and N-HITS across all datasets, thereby demonstrating that frequency-domain MLP is more effective than time-domain MLP. We will include these results in the final version.
Additionally, we have conducted a comparison between FreTS and another MLP-based baseline LightTS [3] on long-term forecasting settings. The results of this comparison are presented in the table below:
| Dataset | |ETTh1 | | | | ETTm1| | | |
|:---------|:---|:---|:---|:---|:---|:---|:---|:---|:---|
| Models |Metric |96| 192| 336 | 720|96| 192| 336 | 720|
| LightTS|MAE|0.063 | 0.066| 0.072| 0.085| 0.058| 0.061|0.065 | 0.073|
| |RMSE | 0.090|0.094 |0.099 | 0.112| 0.084| 0.088| 0.092| 0.101|
| FreTS|MAE| 0.061| 0.065| 0.070| 0.082| 0.052| 0.057| 0.062| 0.069|
| | RMSE| 0.087| 0.091| 0.096| 0.108| 0.077| 0.083| 0.089| 0.096|
The results obtained from both the short-term forecasting and long-term forecasting experiments clearly demonstrate that FreTS outperforms the time domain MLP-based baselines. This indicates the superiority of the proposed frequency-domain MLPs over their time-domain counterparts in terms of forecasting performance.
We will add the above MLP-based baselines to the experiments and include the results in the final version.
[1] N-BEATS: Neural basis expansion analysis for interpretable time series forecasting, ICLR. 2019.
[2] NHITS: Neural Hierarchical Interpolation for Time Series Forecasting, AAAI. 2023.
[3] Less Is More: Fast Multivariate Time Series Forecasting with Light Sampling-oriented MLP Structures. arXiv, 2022.
**W2. The superiority performance of FreTS verifies the effectiveness of (one layer) frequency-domain MLPs, but the authors did not investigate how well multi-layers of frequency-domain MLPs perform in MTS forecasting.**
A2. Theoretically, FreMLP shows an inherent characteristic of energy compaction, as shown in **Theorem 1**. This means FreMLP can effectively extract the key patterns while reducing redundant information, leading to improved generalization and better performance. Intuitively, a one-layer FreMLP is powerful enough to learn channel-wise or time-wise dependencies, while stacking multiple FreMLPs brings more parameters and may reduce learning efficiency. To verify this, we conducted additional experiments on the Exchange dataset under an input length of 96 and a prediction length of 96. The results are shown in the following table:
| Metric | one layer | two layers| three layers | four layers|
|:--------|:---------|:---|:---|:---|
| MAE | 0.037 | 0.037| 0.037|0.038|
| RMSE | 0.051 | 0.051|0.051|0.052|
From the results, we can observe that FreTS achieves the same performance when using a {1, 2, 3}-layer temporal learner. The results validate our claim that a one-layer FreMLP is adequate for constructing the channel learner and the temporal learner to capture channel-wise and time-wise dependencies, respectively.
We will supplement the experiments of hyperparameter analysis in the appendix of our final version.
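As a hedged sketch of the one-vs-multi-layer question discussed above (our own reading of a frequency-domain MLP layer, not the paper's exact Equation (7); the activation choice is an assumption), stacking layers simply composes complex affine maps on the spectrum:

```python
import numpy as np

def fremlp_layer(H, Wr, Wi, Br, Bi):
    """One FreMLP-style layer on a complex spectrum H of shape (F, d): a complex
    linear map on the hidden dimension d, with ReLU applied separately to the
    real and imaginary parts (one plausible choice of activation)."""
    real = np.maximum(H.real @ Wr - H.imag @ Wi + Br, 0.0)
    imag = np.maximum(H.real @ Wi + H.imag @ Wr + Bi, 0.0)
    return real + 1j * imag

rng = np.random.default_rng(0)
L, d, n_layers = 96, 8, 2
x = rng.standard_normal((L, d))        # embedded series
H = np.fft.rfft(x, axis=0)             # spectrum, shape (L//2 + 1, d)
for _ in range(n_layers):              # each extra layer adds ~2*d*d real parameters
    Wr = 0.1 * rng.standard_normal((d, d))
    Wi = 0.1 * rng.standard_normal((d, d))
    H = fremlp_layer(H, Wr, Wi, np.zeros(d), np.zeros(d))
y = np.fft.irfft(H, n=L, axis=0)       # back to the time domain, shape (L, d)
```

The parameter-count comment in the loop makes concrete why stacking layers trades extra capacity for efficiency.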
**Q1. The learned patterns on the frequency spectrum show that FreTS can capture more obvious periodic patterns. But when there are no periodic patterns in MTS data, can FreTS still perform well? That is, whether FreTS is suitable for non-periodic MTS data.**
A3. **Please refer to general response about FreTS on non-periodic data**.
**Q2. Why FreTS contains only one layer of FreMLP in either frequency channel learner and frequency temporal learner? How well does FreTS work when stacking with multiple FreMLPs?**
A4. Note that during our experiments, we conducted careful hyperparameter tuning for FreTS. Specifically, we focused on tuning the number of layers in both the frequency channel learner and the frequency temporal learner. We found that using a one-layer channel learner and a one-layer temporal learner yielded the best forecasting performance for FreTS.
In addition, we have provided the corresponding results of FreTS for different numbers of layers, specifically {1, 2, 3, 4}, in the above table in the response to **W2**. From the results, we can observe that FreTS achieves the best performance when adopting one-layer learner.
We'll update the paper to address the above aspects and hope we have addressed your comments.
---
Rebuttal Comment 1.1:
Title: Reply to rebuttal
Comment: Thanks for the authors' effort in providing the rebuttal. This helped clarify my concerns. I am happy to keep my current rating.
---
Reply to Comment 1.1.1:
Title: Thanks for your feedback
Comment: Dear Reviewer aNLj, we appreciate your feedback. If you have any further questions or concerns, please feel free to let us know.
Authors | Rebuttal 1:
Rebuttal: Dear Reviewers, ACs and the SAC:
We thank you all for the review and valuable comments. We'll clarify them in the final version to address all relevant questions and constructive suggestions.
To address the common concerns regarding our frequency channel learner (Reviewer u93d, Reviewer BMVc, Reviewer bjea) and our model effectiveness on non-periodic data (Reviewer aNLj, Reviewer BMVc), we provide explanations as follows. We also carefully consider each comment of Reviewer bjea and realize that there might be some misunderstandings towards our method. We've tried our best to clarify these misunderstandings in the specific rebuttal.
## General Response about frequency channel learner and non-periodic data
**Frequency Channel Learner**
Theoretically, the frequency channel learner is equivalent to global convolutions on the variables for each timestamp. In addition, we have performed visualization experiments to verify the channel-wise learning capability of our frequency channel learner.
+ Theoretically: According to **Theorem 2**, we know that the operations of frequency-domain MLPs can be viewed as global convolutions in the time domain, i.e., Eq. (9) $\mathcal{H}\mathcal{W}+\mathcal{B}=\mathcal{F}(\mathbf{H}\ast W+B)$. As a result, the frequency channel learner containing frequency-domain MLPs can be regarded as global convolutions (global CNNs) over the variable dimension.
+ Empirically, we have investigated the channel-wise learning capability of our frequency channel learner on the METR-LA dataset.
Specifically, we randomly select 30 detectors and visualize their corresponding adjacency matrix learned by the frequency channel learner via a heatmap (**see Figure 1 in the attached PDF**). By examining the learned adjacency matrix in conjunction with the actual road map, we observe that detectors that are physically close to each other correspond to high correlation values in the heatmap. The visualization results demonstrate that the frequency channel learner learns channel-wise dependencies effectively.
**FreTS on non-periodic data**
Our proposed FreTS is suitable for time-series data that may not be seasonal and may contain some monotonic tendency.
Note that the frequency-domain MLPs compared with time-domain MLPs have two advantages: **global view and energy compaction**.
+ First, the frequency-domain MLPs are equivalent to efficiently performing global convolutions over timestamps (see **Theorem 2**). Accordingly, the frequency temporal learner is able to capture various global temporal information far beyond the periodic information (including periodic patterns, global monotonic tendency, and temporal correlations).
+ In addition, **Theorem 1** implies that, if most of the energy of a time series is concentrated in a small number of frequency components, learning in the frequency spectrum can facilitate preserving clearer/more effective patterns. This characteristic is beneficial for all types of MTS data, regardless of whether the data is periodic or not.
As a result, FreTS is not only suitable for MTS data with periodic patterns, but also for MTS data without periodic patterns.
+ Empirically, our extensive experimental results on different kinds of time series datasets from various applications demonstrate the state-of-the-art performance of our proposed FreTS. The benchmark datasets include Wiki, ECG, and Exchange, which do not contain significant seasonal patterns.
+ Moreover, we have provided extensive visualization experiments in **Figures 5, 7, and 8** to investigate the learned weights in the frequency temporal learner. From the results, we observe that the weight coefficients of the real or imaginary part exhibit energy aggregation characteristics (energy compaction). The results on various time series data verify that frequency-domain MLPs can facilitate learning informative features and patterns, which is not limited to time series data with periodic patterns.
+ Furthermore, we have synthesized one multivariate time series dataset (linear: $X=aT+\sigma$ where $a$ is the trend of the variable and $\sigma$ is random noise) and conducted experiments on the dataset. The results under different prediction lengths are visualized in the attached PDF (**see Figures 2, 3, 4, 5, and 6 in the attached PDF**).
From these figures, we can find that FreTS can capture a monotonic tendency.
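As a small self-contained illustration of the global-view and energy-compaction arguments above (synthetic data of our own, not one of the paper's benchmarks), a trended series with one seasonal component concentrates almost all of its spectral energy in a handful of coefficients:

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.arange(512)
# Synthetic series: daily-style seasonality, a linear trend, mild noise.
x = np.sin(2 * np.pi * t / 24) + 0.01 * t + 0.1 * rng.standard_normal(512)

energy = np.abs(np.fft.rfft(x)) ** 2
top10 = np.sort(energy)[::-1][:10].sum() / energy.sum()
print(f"top-10 of {energy.size} coefficients hold {top10:.1%} of the energy")
assert top10 > 0.9  # most energy sits in a few frequency components
```

A learner operating on this spectrum can therefore focus on a compact set of informative coefficients even when the series is not purely periodic.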
We hope our response has addressed all concerns. We would greatly appreciate any further constructive comments or discussions.
Pdf: /pdf/7679ce23bd67eda2239c4681f27b8288e5d2ee81.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: The paper studies time series forecasting problem under the deep learning paradigm. Authors propose a new network architecture with MLPs in the frequency domain to capture both inter-series and intra-series correlations.
Strengths: 1. The paper is well-written and easy to follow.
2. Learning the spatio-temporal correlations in the frequency domain with MLPs seems reasonable and proves lightweight in the experiments.
3. Experiments with diverse datasets and solid comparison methods demonstrate the effectiveness of the proposed method.
Weaknesses: 1. As mentioned in the related work part, there are already several works in the frequency domain for time series forecasting. The differences between this work and them should be clearly discussed.
2. Since efficiency is a core merit of the proposed method, it would be better to analyze the theoretical computation complexity compared with other methods.
3. While the frequency method may improve efficiency, it is not clear whether these kinds of methods mainly focus on the low-frequency part while ignoring the high-frequency part, which is more important for time series forecasting.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Please check the weakness
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate your review and the positive comments regarding our paper. We would like to respond to your comments as follows.
**W1. As mentioned in the related work part, there are already several works in the frequency domain for time series forecasting. It should be clearly discussed what’s the difference of this work.**
A1. Thank you for your suggestion. In the Related Work section, we have presented various existing frequency-based models that incorporate frequency techniques into neural networks. These models include SFM, which combines DFT with LSTM; StemGNN, which incorporates DFT with GNN; and Autoformer and FEDformer, which use DFT with self-attention in the Transformer architecture. These models are considered **frequency-enhanced architectures** as they leverage frequency techniques to improve upon the original architecture, such as Transformer. However, our work differs from these approaches since we propose a **frequency learning architecture** that learns channel-wise and time-wise dependencies in the frequency domain through a novel frequency-domain MLP network.
To clearly distinguish our work from the previous frequency-based models, we will add a comprehensive discussion highlighting these key differences and distinctions to the Related Work section.
**W2. Since the efficiency is a core merit of the proposed method, it would be better to analyze the theoretical computation complexity compared with other method.**
A2. In the following table, we analyze the theoretical complexity compared with other representative SOTA models:
| Long-term Model | Complexity | Short-term Model | Complexity |
|:---|:---|:---|:---|
| FEDformer | $\mathcal{O}(L)$ | AGCRN | $\mathcal{O}(N^2)$ |
| Autoformer | $\mathcal{O}(L\log L)$ | MTGNN | $\mathcal{O}(N^2)$ |
| PatchTST | $\mathcal{O}((L/P)^2)$ | StemGNN | $\mathcal{O}(N^3)$ |
| | | FreTS | $\mathcal{O}(N\log N+L\log L)$ |
From the table, we can see that our complexity is log-linear. We will add the theoretical computation complexity to the final version.
**W3. While the frequency method may improve the efficiency, it is not clear whether these kind of methods would mainly focus on the low-frequency part but ignoring the high-frequency part, which is more important for time series forecasting.**
A3. Note that, **after the frequency transformation, we neither discard any frequency components nor specifically focus on the low-frequency components while ignoring the high-frequency ones**. Instead, we preserve all frequency components and feed them into the frequency channel/temporal learners. All components, whether high-frequency or low-frequency, are adaptively highlighted or down-weighted according to the specific characteristics of the data.
1. The core operation of our FreTS is the frequency-domain MLP (FreMLP) that performs multiplications in the frequency domain and is equal to global convolutions in the time domain (refer to **Appendix D**). The efficiency of our FreMLP arises from two primary factors:
- First, frequency transformation often leads to a sparse frequency spectrum specific to data characteristics, resulting in a significant portion of the frequency components, including both low-frequency and high-frequency ones, approaching zero. This enables FreTS to adaptively ignore negligible components and highlight significant components based on the characteristics of the data.
- Second, according to the convolution theorem [1], the Fourier transform of a convolution between two sequences is equal to the pointwise multiplication of their respective Fourier transforms. This theorem enables us to efficiently conduct convolutions in the frequency domain, leading to improved computational efficiency.
In summary, both the sparse frequency spectrum and the point-wise product in the frequency domain contribute to the efficiency of the FreTS (FreMLP) model.
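To make the sparse-spectrum point concrete, here is a small self-contained check on a synthetic series of our own (not the paper's data): dropping every frequency component below 1% of the peak magnitude keeps only a small fraction of the spectrum while barely changing the reconstructed series.

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(256)
# Two sinusoids plus mild white noise.
x = (np.sin(2 * np.pi * t / 16) + 0.5 * np.cos(2 * np.pi * t / 8)
     + 0.05 * rng.standard_normal(256))

spec = np.fft.rfft(x)
# Keep only components with at least 1% of the peak magnitude.
mask = np.abs(spec) >= 0.01 * np.abs(spec).max()
x_hat = np.fft.irfft(spec * mask, n=len(x))

rel_err = np.linalg.norm(x - x_hat) / np.linalg.norm(x)
print(f"kept {mask.sum()}/{spec.size} components, relative error {rel_err:.3f}")
```

In this toy case the retained components are far fewer than half of the spectrum, illustrating how negligible components can be adaptively down-weighted at little cost in fidelity.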
2. In addition to efficiency, our FreTS model also benefits from frequency techniques to improve forecasting effectiveness.
- First, the calculation of a frequency spectrum involves summing all signals across time, resulting in each spectrum element in the frequency domain attending to all timestamps in the time domain. This characteristic indicates that a spectrum provides a global view of the entire sequence of time series, which is advantageous for capturing global patterns, such as global periodic patterns, which are crucial for effective time series forecasting. This is demonstrated in **Figures 1(a), 9, 10, 11, and 12**.
- Furthermore, frequency transformation exhibits the characteristic of *energy compaction*, whereby the essential features of signals are typically captured by a small subset of frequency coefficients, while the remaining coefficients have negligible magnitudes. This characteristic helps reduce data redundancy and facilitates identifying more important features and clearer patterns (as shown in **Figures 1(b), 5, 7, and 8**).
Reference:
[1] S. S. Soliman and M. D. Srinath. *Continuous and Discrete Signals and Systems*. Prentice Hall, 1990.
We'll clarify the above in the final version and hope we have addressed your comments.
---
Rebuttal Comment 1.1:
Comment: Thanks for the author's responses, which have solved my main concerns. Thus, I would like to raise the score to weak accept.
---
Reply to Comment 1.1.1:
Title: Thanks for your feedback
Comment: Dear Reviewer WVuT, we sincerely value your feedback and the constructive suggestions you've provided for enhancing our paper. If you have any further questions or concerns, please feel free to let us know.
Authors | null | null | null | null | null | null |
GNNEvaluator: Evaluating GNN Performance On Unseen Graphs Without Labels | Accept (poster) | Summary: This paper addresses the problem of evaluating the performance of well-trained Graph Neural Networks (GNNs) on unseen test graphs without ground-truth labels. Traditional evaluation methods that rely on annotated datasets are not applicable in real-world scenarios where test graphs are unlabeled. The paper proposes a two-stage evaluation framework: DiscGraph set construction and GNNEvaluator training and inference. The framework effectively captures the distribution discrepancies between training and test graphs, models graph structural discrepancies, and estimates node classification accuracy without labeled data. Experimental results indicate the success of the proposed method.
Strengths: 1. I really appreciate the practical value of this paper, since evaluating a model in real-world scenarios often takes a long time. The paper introduces the novel problem of evaluating GNN models on unseen test graphs without labels, which is essential for real-world GNN deployment and serving.
2. The proposed framework first constructs a set of meta-graphs to simulate potential unseen test graphs, captures graph data distribution discrepancies, and models graph structural discrepancies using latent node embeddings and node class predictions. The GNNEvaluator is then trained to estimate node classification accuracy based on the representative DiscGraph set.
3. The paper evaluates the proposed method on real-world unlabeled test graphs and demonstrates its effectiveness in estimating node classification accuracy.
Weaknesses: 1. In Section 3.3, the authors introduce extracting a seed subgraph $S_{seed}$ from the observed training graph $S$. Nonetheless, the details of how the seed subgraph is extracted are missing.
2. The estimation of the represented discrepancy is too simple. I am concerned that it may not generalize to larger graphs in more domains, especially larger datasets, e.g., ogbn-arxiv and ogbn-products. Those graphs show more diverse patterns, and the proposed method may not work in this scenario.
3. The graph augmentation methods are meant to provide a sufficient quantity of meta-graphs. Nonetheless, the augmentation algorithms seem to perturb the graph only slightly. I wonder how those methods can generalize to other scenarios with a larger gap. Moreover, I wonder whether the method can predict the performance of a model trained with such augmentation techniques, e.g., DropEdge.
4. I wonder why the GNN evaluator has transferability, e.g., whether an evaluator learned on SAGE can also predict the performance of GCN.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: See the above weakness
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 4 excellent
Limitations: The authors have adequately addressed the limitations
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Response to Reviewer ZAqu**
We sincerely appreciate your valuable suggestions and comments on our work, and we are pleased to learn that the practical value of our proposed GNNEvaluator is positively identified by the reviewer. The following are our detailed responses to the reviewer’s thoughtful comments. We are expecting these could be helpful in answering your questions.
**W1: How to extract the seed subgraph**:
The seed subgraph is sampled from the observed graph $S$. For instance, in our experiments, for the observed ACM dataset, we use 30% of its nodes to construct the seed subgraph. Thanks for your suggestion; we will add these details to the final version.
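A minimal sketch of such an extraction (our own illustration; the uniform sampling scheme and adjacency-list representation are assumptions, since the rebuttal only states that 30% of the nodes are used):

```python
import random

def extract_seed_subgraph(adj, frac=0.3, seed=0):
    """Sample a fraction of nodes uniformly at random and return the
    node-induced subgraph (hypothetical sketch, not the authors' code)."""
    rng = random.Random(seed)
    keep = set(rng.sample(sorted(adj), int(frac * len(adj))))
    return {u: [v for v in adj[u] if v in keep] for u in keep}

# Toy observed graph S as an adjacency list.
adj = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2, 4], 4: [3, 5],
       5: [4, 6], 6: [5, 7], 7: [6, 8], 8: [7, 9], 9: [8]}
seed_subgraph = extract_seed_subgraph(adj)
print(len(seed_subgraph))  # 3 nodes, i.e., 30% of the 10-node toy graph
```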
**W2: Generalize to the larger graph in more domains for represented discrepancy estimation**:
For GNN model evaluation, our core idea is to construct a set of discrepancy meta-graphs for modeling complex and diverse graph discrepancies. And in each discrepancy meta-graph, we measure the node-level represented discrepancy by fully leveraging the output node embeddings from well-trained GNN models.
Hence, for larger graphs in more diverse domains (e.g., ogbn-arxiv, ogbn-products), a possible solution is to create more discrepancy meta-graphs to comprehensively cover more diverse patterns with various represented discrepancies. However, it's worth noting that constructing a thorough, extensive, and diverse discrepancy meta-graph set for more complex and larger graph domains is still a challenging task, and we are happy to further explore this point in the future.
**W3-1: The graph augmentation algorithms seem to perturb the graph only slightly; how can those methods generalize to other scenarios with larger graph gaps?**
We introduce 4 types of graph augmentation methods, each with 100 random perturbations whose rate is sampled from a uniform distribution (0,1). For example, these 400 perturbations in total make the GCN's node classification accuracy vary from a minimum of 17.53% to a maximum of 91.95% on the DBLPv8 dataset. These results illustrate that the graph augmentation methods can perturb the graph over a relatively large range. Hence, the approach can generalize to other scenarios with larger graph gaps by involving more graph augmentation methods and more diverse perturbation rates.
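As one concrete instance of such an augmentation family (random edge dropping with a rate drawn from a uniform distribution; a hedged sketch of ours, not the authors' exact pipeline):

```python
import random

def drop_edges(edges, rate, seed=0):
    """Randomly drop each edge independently with the given probability."""
    rng = random.Random(seed)
    return [e for e in edges if rng.random() >= rate]

edges = [(i, i + 1) for i in range(100)]    # toy edge list
rate = random.Random(42).uniform(0.0, 1.0)  # perturbation rate ~ U(0, 1)
augmented = drop_edges(edges, rate)
print(f"rate={rate:.2f}, kept {len(augmented)}/{len(edges)} edges")
```

Repeating this with many sampled rates (and with the other augmentation types) yields meta-graphs of widely varying difficulty.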
**W3-2: Whether GNNEvaluator can help to predict the performance of a model trained with those graph augmentation techniques, e.g., DropEdge**:
Yes, our proposed GNNEvaluator can be used to predict the performance of a model trained with graph augmentation techniques. That is because, in the model evaluation stage, the well-trained model is fixed, and our proposed GNNEvaluator only leverages its output node embeddings and node class predictions. That means, even for a model trained with graph augmentation, our proposed GNNEvaluator can still directly leverage its outputs to predict its performance.
**W4: Why the GNN evaluator can have the transferability**:
We would like to clarify a potential misunderstanding here. In our experiments, the “transferability” is reflected in the *well-trained GNNs* (e.g., a GAT model trained on ACMv9 but used for inference on DBLPv8) transferring *across datasets*, and we use our proposed GNNEvaluator to estimate these well-trained GNNs' transferred node classification accuracy. Hence, our proposed GNNEvaluator should NOT be expected to have transferability *across models*: since the GNNEvaluator is a model evaluator, it is model-specific and driven by a particular GNN model. If it is learned on SAGE, our whole evaluation process is based on SAGE's output node embeddings and class predictions, so it can only be used for evaluating SAGE, not GCN.
---
Rebuttal Comment 1.1:
Title: Response
Comment: Thanks for your response. I would like to keep the score since this method is still not of practical use without results on larger datasets.
---
Reply to Comment 1.1.1:
Title: Reply to Reviewer ZAqu's response to our rebuttal
Comment: Dear Reviewer ZAqu,
Thank you for your response to our rebuttal. We are glad our efforts have addressed most of your questions and concerns, and we really appreciate your support for this new research domain of “GNN model evaluation”.
In this very early phase of exploration, we also hold the practical application of GNN model evaluation in high regard. Our dedicated efforts are centered around *shaping this research problem to fit real-world, practical scenarios involving unseen and unlabeled test graphs*. And our GNNEvaluator stands as **the first feasible solution**, serving to enlighten GNN model developers and users about the potential performance of well-trained models.
We firmly believe that within this emerging realm, numerous captivating questions and opportunities await thorough exploration. We're dedicated to further advancing this field, inspiring future research that delves into comprehensive GNN model evaluations across extensive graph data scales and properties! | Summary: The paper presents a novel problem called GNN model evaluation, aiming to assess the performance of a Graph Neural Network (GNN) on unseen graphs without labels. The authors propose a two-stage GNN model evaluation framework, which includes DiscGraph set construction and GNNEvaluator training and inference. The DiscGraph set is designed to capture diverse graph data distribution discrepancies using a discrepancy measurement function that exploits GNN outputs. The GNNEvaluator, composed of a GCN architecture and an accuracy regression layer, learns to estimate node classification accuracy with effective supervision from the DiscGraph set. The method demonstrates effectiveness in evaluating GNN performance on real-world, unlabeled test graphs, achieving low errors compared to ground-truth accuracy.
Strengths: This paper is the first to evaluate GNN performance on out-of-distribution (OOD) graphs, which may inspire future studies. The meta-graph set construction and the three characteristics are intriguing and useful. Additionally, the proposed method demonstrates significant performance improvements over the baselines.
Weaknesses: This paper has a few areas that could be improved upon. I would consider raising the score if the authors addressed the first two points.
1. The paper lacks a discussion of the use cases for the proposed method, considering the prediction error remains relatively high (Please refer to Question 1).
2. The experiments involve only four datasets, and there is a lack of comprehensive study on node classification in different settings (Please refer to Question 2).
3. The paper does not provide ablation studies for the predictor in GNNEvaluator (Please refer to Question 3).
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: To enhance the manuscript, the authors may consider addressing the following points:
1. Although the proposed method outperforms the baselines, the prediction error remains high. For example, in ACMv9 -> DBLPv8, the performance of GCN, SAGE, GAT, GIN, and MLP are 45.51%, 45.49%, 44.84%, 50.37%, and 33.28%, respectively, and the evaluation errors are 4.85%, 4.11%, 12.23%, 10.14%, 22.20%, respectively. The relative evaluation errors for various GNNs are quite significant. With such high relative errors, it is challenging for GNNEvaluator to distinguish the performance of any two GNNs. In light of this, what are the practical applications of the proposed method? Including a discussion of compelling real-world applications could make this paper more appealing.
2. While the paper focuses on node classification settings, which is an important application of GNNs, the experiments only evaluate inductive node classification on citation networks. To make the experiments more comprehensive, I suggest that the authors investigate (1) inductive node classification on heterophilous graphs, and (2) transductive node classification.
3. The paper employs a two-layer GCN as a predictor, but detailed ablation studies are missing. For example, (1) what is the variance of the GCN predictor? (2) could a simpler model (e.g., SGC) or a more complex model achieve better performance? It would be beneficial to include such analysis in the paper.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 3 good
Contribution: 3 good
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Response to Reviewer jBkz**
Thanks for your insightful and constructive review of our work. We especially appreciate your interest in more exploration experiments of our proposed GNNEvaluator, and we are encouraged to know that our efforts on "the meta-graph set construction and the three characteristics" have been recognized. Following are our responses, and we are expecting these could help answer your questions.
**Q1-What are the practical applications of the proposed method?**:
*Recap*: Our proposed GNNEvaluator aims to provide a feasible solution for knowing the potential performance of existing well-trained GNNs on real-world unseen graphs *without labels*, by directly predicting accuracy.
*Application*: The most straightforward practical application is that our GNNEvaluator aids in selecting relatively well-performing GNNs from a model collection, giving us confidence in their performance on new, unseen, and unlabeled graphs. Despite the relatively high prediction errors in the mentioned cases (ACMv9 $\rightarrow$ Citationv2, which has 6 classes) in Table 1 of our main submission, we list the ranks of different GNN models under the ground-truth ACC (%) and our GNNEvaluator's predicted ACC (%) in the following Table.Re-jBkz-1. We observe that our predicted ACC yields a consistent ranking of the GNN models even with different prediction errors; in this case, we can still place greater trust in the GIN model, although a gap remains between our prediction and the ground truth. Since "GNN model evaluation" is an unexplored research field, we would like to emphasize that our proposed GNNEvaluator achieves this goal by predicting classification accuracy on unseen and unlabeled data. While this is still an early exploration, our experimental results shed light on the promising potential for practical model-selection applications. We are committed to continuous efforts to reduce the prediction error, and we hope to inspire future research to further explore this direction.
**Table.Re-jBkz-1. Rank comparison of different GNN models between the Ground-Truth ACC (%) and our GNNEvaluator predicted ACC (%).**
| Models | Prediction error | GT target ACC (%) | Ours predict ACC (%) | Rank-GT-ACC | Rank-Ours-ACC |
| :---:| :---: | :---: | :---: | :---: | :---: |
| GCN | 10.09 | 45.51 | 55.61 | 2 | 2 |
| SAGE | 7.19 | 45.49 | 55.44 | 3 | 3 |
| GAT | 9.11 | 44.84 | 53.94 | 4 | 4 |
| GIN | 6.11 | 50.37 | 56.49 | 1 | 1 |
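The rank consistency claimed above can be verified directly from the accuracies in Table.Re-jBkz-1:

```python
# Accuracies copied from Table.Re-jBkz-1 above.
gt_acc   = {"GCN": 45.51, "SAGE": 45.49, "GAT": 44.84, "GIN": 50.37}
pred_acc = {"GCN": 55.61, "SAGE": 55.44, "GAT": 53.94, "GIN": 56.49}

rank = lambda acc: sorted(acc, key=acc.get, reverse=True)
print(rank(gt_acc))                    # ['GIN', 'GCN', 'SAGE', 'GAT']
assert rank(gt_acc) == rank(pred_acc)  # identical model ordering
```

Even with absolute errors of 6-10 points, the predicted accuracies preserve the ground-truth ordering of the four models.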
**Q2: Investigate (1) experiments on heterophilous graphs, and (2) transductive node classification.**
For **(1) experiments on heterophilous graphs**, we have conducted experiments on two typical webpage heterophilous graphs, Cornell and Texas, with heterophily degrees of 0.11 and 0.16, respectively. The MAE results are shown in **Table.Re 1 of the response PDF file** for the Cornell$\rightarrow$Texas and Texas$\rightarrow$Cornell cases (the lower, the better). All detailed settings are consistent with our main submission. As can be observed, our proposed GNNEvaluator generally achieves superior performance over the other baselines for all GNN models, further demonstrating the effectiveness of our method.
For **(2) transductive node classification**, it means the test nodes can be seen in the model training process by leveraging their neighbor structure information even without using node labels.
However, this setting is not aligned with the proposed GNN model evaluation scenario, where the unlabeled test graphs are usually unseen in real-world practical applications. For instance, consider a citation network where new paper nodes continually emerge over time. These new nodes are strictly unavailable during the model's training phase, making the transductive setup infeasible.
**Q3-1: What is the variance of the GCN predictor?**:
In the following Table.Re-jBkz-2, we list the Mean Absolute Error (MAE) $\pm$ standard deviation (STD) of different GNN models across five runs to illustrate the variance of our proposed GNNEvaluator (GCN predictor). Compared with the baseline method AutoEval-G, our proposed method could achieve better MAE results with comparable or even lower variance (smaller STD) in some cases; for example, in the Citationv2$\rightarrow$DBLPv8 case on the GAT model, our STD is only 0.76, better than AutoEval-G's 3.17.
**Table.Re-jBkz-2. Mean Absolute Error (MAE)$\pm$standard deviation (STD) of different GNN models for Citationv2$\rightarrow$DBLPv8.**
| Citationv2$\rightarrow$DBLPv8 | GCN | SAGE | GAT | GIN | MLP |
| :--- | :---: | :---: | :---: | :---: | :---: |
| AutoEval-G | 2.57$\pm$2.24 | 16.52$\pm$1.88 | 6.96$\pm$3.17 | 19.20$\pm$15.36 | 32.24$\pm$1.92 |
| GNNEvaluator (**Ours**) | 11.64$\pm$5.35 | 7.02$\pm$1.38 | 5.58$\pm$0.76 | 6.46$\pm$5.17 | 22.87$\pm$2.42 |
**Q3-2: Could a simpler model (e.g., SGC) or a more complex model achieve better performance?**:
We tested the effectiveness of different backbone architectures for the GNN predictor in **Table.Re 2 of the response PDF file**. Compared with the GCN-backboned evaluator used in our main submission, we evaluate the simpler SGC-backboned evaluator and a more complex GPRGNN-backboned evaluator with a sophisticated aggregation scheme.
According to the results, the simpler SGC-backboned evaluator fails to achieve satisfactory performance, as it is too simple to effectively model complex and diverse graph discrepancies. Although the complex GPRGNN-backboned evaluator achieves relatively good performance in certain SAGE evaluations (though still only comparable to our GCN-backboned evaluator), it does not achieve consistently better performance for all GNNs. In contrast, our proposed GCN-backboned evaluator is a good and general choice with relatively consistent performance, considering the diversity of real-world application scenarios. We will add these discussions to the final version.
Strengths: This is a very well written paper, and it is aiming at a new domain of research for GNNs. The proposed idea is very well presented, in addition to detailed analysis of each part of the GNNEvaluator. Due to lack of related work for GNN model, the authors introduced and adapt recent works from recent CNN algorithm. Overall, I found this paper very novel.
Weaknesses: I found the paper very interesting; however, the experimental studies are very limited (understandable given the novelty of the problem). It is not very convincing to see results on only one type of application (with a limited number of label classes).
Also, it would be beneficial to see the performance of different architectures (deeper GNNs), as right now the GNNEvaluator only uses a 2-layer GCN.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: - Related to the problem mentioned above, how can we be certain about the number of DiscGraphs generated in the first stage?
- What are the time and memory requirements of these additional stages beyond GNN training? Given the limited and small evaluated graphs, how scalable is the current approach?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 4 excellent
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Response to Reviewer 3iA3**
We sincerely appreciate your thoughtful review of our paper. We are so encouraged by your recognition of the “new domain of research for GNNs” on GNN model evaluation, and this means a lot to advancing the GNN model inference and deployment in real-world applications. We have carefully considered your comments and suggestions, and the following are our detailed responses. We are expecting these could be helpful in answering your questions.
**W1: The experimental studies is very limited (understandable due to the novelty of the problem), and clarification of the results on limited number of labeled class application**:
Thanks for your valuable comments. Due to the complexity and diversification of unobserved and unlabeled test graphs in real-world GNN model evaluation, we have tried our best to cover many experimental cases in our experiments, for instance, testing 5 typical GNN models on 6 transfer inference cases over 3 datasets, for a total of 5$\times$6=30 evaluation scenarios (Tables 1-3 in the main submission), as well as ablation studies and further discussion (Figs. 3-5 in the main submission and Tables A1-A8 and Figs. A1-A2 in the Appendix). We are committed to continuous efforts toward more comprehensive future studies to advance and explore this new research topic of GNN model evaluation.
For the label class setting, we assume the observed training graph and the potential unseen and unlabeled test graph have the same number of label classes, for instance, both with C classes. This setting aligns with the ability of the to-be-evaluated GNN models: when such a model is well-trained with C classes on the observed training graph, it cannot be directly used for inferring unlabeled and unseen test graphs with an unknown number of label classes (for example, C+1 classes), since its model architecture has only C-class prediction outputs.
We appreciate your insightful comments, and we believe this is a very interesting question.
As mentioned at the end of our main submission (Lines 359-361), we are happy to further explore this in the future.
**W2: It would be beneficial to see the performance on different architectures (deeper GNN) as right now the GNNEvaluator only uses 2 layers of GCN**:
As shown in the following Table Re-3iA3-1, we compare GCN-Predictor (3 layers) with GCN-Predictor (**2 layers, ours**) on DBLPv8$\rightarrow$ACMv9 (D$\rightarrow$A) and DBLPv8$\rightarrow$Citationv2 (D$\rightarrow$C) for GCN, SAGE, and GAT evaluations. The 2-layer setting used in our main submission still achieves generally better performance than the 3-layer GCN evaluator.
A likely reason is that a deeper GNN may suffer from over-smoothing, making node features too similar to discriminate. Thanks for your thoughtful suggestion; we will add this to the final version.
**Table.Re-3iA3-1. Mean Absolute Error (MAE) performance on different GNNEvaluator layers (the lower, the better).**
**D$\rightarrow$A**

| Models | GCN | SAGE | GAT | *Avg.* |
| :--- | :--- | :--- | :--- | :--- |
| GCN-Predictor (3-layers) | 4.85 | 11.89 | **5.65** | 7.46 |
| GCN-Predictor (**2-layers, ours**) | **2.46** | **10.27** | 6.94 | **6.56** |

**D$\rightarrow$C**

| Models | GCN | SAGE | GAT | *Avg.* |
| :--- | :--- | :--- | :--- | :--- |
| GCN-Predictor (3-layers) | 17.57 | 11.97 | 5.44 | 11.66 |
| GCN-Predictor (**2-layers, ours**) | **11.68** | **7.83** | **3.97** | **7.83** |
**Q1: How we can be certain about the number of DiscGraph generated for first stage?**:
At the first stage, we cannot be certain about the optimal number of DiscGraphs; we can only study its effect empirically, as in Sec. 4.5 of our main submission. The results change slightly with the number of DiscGraphs, but on average they are not very sensitive beyond a certain number (over 200).
**Q2-1: What is the time and memory requirement for these additional stages for GNN training?**:
We tested the run time and memory usage on a single NVIDIA GeForce RTX 3080 GPU for evaluating a GCN model well-trained on the DBLPv8 dataset: the running time is only 44.60 seconds per 10 training epochs, and the overall dynamic memory usage is only 1990 MB. Thanks for your valuable question; we will add this to our final version.
**Q2-2 Given the limited and small evaluated graphs, how scalable is current approach?**:
The largest graph (ACMv9) used in our experiments contains 7k nodes with 11k edges, and it can still be modeled with the 2-layer GCN evaluator used in our main submission.
For larger unseen test graphs in the real world, we might consider graph sampling techniques or changing the backbone of the GNN evaluator to adapt it to very large graphs, such as a GraphSAGE-backboned or GraphSAINT-backboned evaluator. Thanks for your insightful question; we are happy to explore this point further in the future. | Summary: This work proposes a new research problem named GNN model evaluation: evaluating GNNs that are well-trained on observed training graphs for testing on real-world unobserved test graphs without labels. To achieve this goal, this work (1) constructs a DiscGraph set to model the distribution differences of graph datasets, and (2) designs a GNNEvaluator that learns on the constructed DiscGraph set and directly outputs the overall node classification accuracy via a regression model.
Experimental results on several distribution-shifted graph datasets verify the proposed method's effectiveness.
Strengths: (1) This work is the first to define the GNN model evaluation problem, which is a very practical problem in real-world GNN model deployment and model selection on the user side. The work has good originality and explores a new research problem for real-world GNN deployment and applications. It could inspire interesting explorations in GNN model service and deployment.
(2) The challenges of GNN model evaluation, the corresponding technical solutions proposed by this work, and the experimental settings are clearly stated. The DiscGraph set constructed for capturing graph dataset discrepancy is novel, with sufficient quantity, represented discrepancy, and known accuracy. Using the calculated distance between two graph dataset distributions as node attributes sounds rational.
And training a new GNN regressor on this constructed DiscGraph set to predict the GNN's classification accuracy over the whole dataset is reasonable.
(3) The experimental setting aligns with the real-world application scenario, and the results support this work's claims. The ablation study, DiscGraph set analysis, and hyper-parameter analysis provide adequate information for verifying the method's effectiveness, with small errors from the ground-truth accuracy.
(4) This work is well-structured and clearly written with logic.
Weaknesses: See Questions
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: (1) How is the seed graph obtained for augmentation to generate the meta-graph? It would be better to show the difference between the observed graph used for training and the created meta-graph for constructing the DiscGraph set.
(2) Do the results in each table (Tables 1, 2, and 3) share the same trained models? For instance, in Table 2, is a GCN model well-trained on Citationv2 and then that same model evaluated on both ACMv9 and DBLPv8?
(3) The authors should provide more experimental analysis and discussion to demonstrate why such a method design benefits GNN model evaluation compared with other baselines. Besides, there might be a typo in the first six columns of Table 1: should it be ACMv9 to Citationv2?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Response to Reviewer nFGD**
We sincerely appreciate your thoughtful review of our paper. We are glad that you recognize the significance of the GNN model evaluation problem proposed by our work. We have carefully considered your comments and suggestions; our detailed responses follow, and we hope they answer your questions.
**Q1-1: How to extract the seed subgraph**:
The seed subgraph is sampled from the observed graph S. For instance, in our experiments, for the observed ACM dataset, we use 30% of its nodes to construct the seed subgraph. Thanks for your suggestion; we will add these details to the final version.
**Q1-2: Show the difference between the observed graph and the constructed meta-graph**:
As mentioned in Sec. E.2 and Fig. A1 of our Appendix, we have provided visualizations of the discrepancy node attributes in the proposed DiscGraph set for GAT, GraphSAGE, and GIN model evaluations. As can be observed, the node attributes in the DiscGraph set differ significantly across GNNs, denoting that our proposed discrepancy measurement function can effectively capture model-specific discrepancy representations.
**Q2: Do the results in each table (tables 1 2 and 3), share the same trained models**:
Yes, the results in each table (Tables 1, 2, and 3) share the same trained models. For the GNN model evaluation problem, the GNN models well-trained on observed training graphs should be kept fixed at the stage of unlabeled real-world graph inference. Hence, the same GCN model well-trained on Citationv2 is evaluated on both ACMv9 and DBLPv8.
**Q3: More experimental analysis and discussions to demonstrate why such a method design could benefit GNN model evaluation compared with other baselines**:
The good performance of our proposed GNNEvaluator can be attributed to the following reasons, according to our experimental results: (1) compared with other methods, we comprehensively simulate and capture the discrepancies of diverse graph data distributions within the constructed DiscGraph set (results in Fig. 3, 4, 5 in the main submission); (2) we design a GNNEvaluator to directly and effectively estimate the node classification accuracy of unseen, unlabeled real-world graphs (results in Tables 1, 2, 3 in the main submission). Thanks for your valuable suggestions; we will add more discussion and analysis in the final version. In the first six columns of Table 1, it should indeed be ACMv9 to Citationv2, and we will correct this typo in the final version. | Rebuttal 1:
Rebuttal: **Common response to all reviewers**:
We thank all reviewers for their thorough review and valuable suggestions. We are delighted that our contributions have been positively acknowledged, including:
**(1) Novel research question for new domain exploration of GNN model evaluation problem ( @All Reviewers!)**;
**(2) Practical application value for real-world GNN deployment and serving (Reviewer nFGD, Reviewer ZAqu)**;
**(3) Novel, intriguing, and useful discrepancy meta-graph set construction (Reviewer nFGD, Reviewer jBkz, Reviewer ZAqu)**;
**(4) Reasonable GNNevaluator design (Reviewer nFGD, Reviewer ZAqu)**;
**(5) The experimental setting aligns with the real-world application scenario (Reviewer nFGD), thorough analysis and well-support results (Reviewer nFGD, Reviewer 3iA3, Reviewer ZAqu), significant performance improvements (Reviewer jBkz).**
We greatly appreciate the positive comments on our work. They encourage us to continue our efforts in advancing this very young research area of GNN model evaluation, and we expect this work to benefit GNN model inference and deployment in real-world applications.
More detailed responses follow. We hope our responses address all weaknesses and questions; please let us know if any concern remains. We have considered your valuable suggestions and will modify the manuscript accordingly in the final version.
Pdf: /pdf/fad5f3831b5706b09f8408bb18e151ed7cbae8c7.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Maximum Independent Set: Self-Training through Dynamic Programming | Accept (poster) | Summary: Maximum Independent Set:
A Dynamic Programming (DP) approach combined with GNNs. The idea is to solve a combinatorial optimization problem, in particular the Maximum Independent Set (MIS): the largest set of nodes such that no two of them are neighbors (no two linked by an edge). The problem is NP-hard, but a GNN can generate an approximation. DP partitions the graph into subgraphs. The main idea is to avoid the exponential search space.
Theorem 1 enables the recursive search: recursively, either a node is removed, or its neighbors are. Basically, if a vertex is in the MIS, its neighbors cannot be!
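The branching behind Theorem 1 can be sketched as follows (a textbook exact recursion for illustration only, not the authors' learned algorithm; it runs in exponential time):

```python
def mis_size(adj, vertices):
    """Exact MIS size via branching: for a vertex v, either v is
    excluded (recurse on G - v) or included (recurse on G - v - N(v)).
    adj maps each vertex to its neighbor set."""
    if not vertices:
        return 0
    v = next(iter(vertices))
    # Branch 1: v is not in the MIS.
    without_v = mis_size(adj, vertices - {v})
    # Branch 2: v is in the MIS, so none of its neighbors can be.
    with_v = 1 + mis_size(adj, vertices - {v} - adj[v])
    return max(without_v, with_v)

# A 4-cycle 0-1-2-3 has maximum independent set size 2, e.g. {0, 2}.
adj = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
print(mis_size(adj, set(adj)))  # → 2
```

The exponential blow-up of this recursion is exactly what the paper's randomized, comparator-guided search avoids.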
GNNs for MIS. The approach is incremental.
A) Build a comparator CMP, based on Theorem 1, for determining which of two graphs has the (approximately) larger MIS size.
B) Implement CMP using GNNs. Each node has neighbors and anti-neighbors (as the degrees of freedom). Initially, the embedding is ZERO for ALL nodes. A node's new embedding is formed by concatenating the pooled embeddings of its neighbors with those of its anti-neighbors. Later, this information is pooled into a global embedding and fed to an MLP, whose decision is propagated backward. This is explained in Section 5.
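A minimal NumPy sketch of this neighbor/anti-neighbor aggregation (the weights, sum pooling, and two-layer setup are illustrative assumptions, not the paper's exact GEM implementation):

```python
import numpy as np

def gem_layer(A, H, W, b):
    """One message-passing step: each node concatenates the summed
    embeddings of its neighbors and of its anti-neighbors, then
    applies a shared linear map with ReLU."""
    n = A.shape[0]
    anti = 1.0 - A - np.eye(n)                        # anti-neighbors: non-edges, excluding self
    msgs = np.concatenate([A @ H, anti @ H], axis=1)  # shape (n, 2d)
    return np.maximum(msgs @ W + b, 0.0)

rng = np.random.default_rng(0)
# Path graph 0-1-2-3-4 (degrees 1, 2, 2, 2, 1).
A = np.zeros((5, 5))
for u, v in [(0, 1), (1, 2), (2, 3), (3, 4)]:
    A[u, v] = A[v, u] = 1.0
d = 4
H0 = np.zeros((5, d))                       # zero initialization, as described above
W1, b1 = rng.standard_normal((2 * d, d)), rng.standard_normal(d)
H1 = gem_layer(A, H0, W1, b1)               # rows identical: zero input + shared bias
W2, b2 = rng.standard_normal((2 * d, d)), rng.standard_normal(d)
H2 = gem_layer(A, H1, W2, b2)               # rows now vary with node degree
print(H1.shape, H2.shape)
```

Note the effect of the zero initialization: the first layer outputs the same row for every node (only the bias survives), and structural distinctions emerge from the second layer onward through the degree-weighted sums.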
Self-Learning using CMP. Since the problem is NP-hard, it is reasonable to use the randomized CMP as a provider of examples. In this regard, Theorem 2 guarantees that any consistent graph-comparing function CMP (one whose expected output is greater for the graph that truly has the larger MIS) induces an optimal algorithm A_CMP for MIS. The basic idea is to parameterize CMP_theta in such a way that it is consistent. This is exactly the GOAL OF TRAINING.
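The induced algorithm A_CMP can be sketched as follows (the deterministic vertex choice and the boolean comparator interface are illustrative assumptions; the paper's CMP is a randomized GNN and the vertex is picked at random):

```python
def a_cmp(adj, include_v):
    """Comparator-induced construction: repeatedly pick a vertex v and
    let the comparator decide between the two branches of Theorem 1 --
    drop v only (v excluded) or drop v and its neighbors (v included).
    Any sequence of answers yields a valid independent set; a
    consistent comparator yields a maximum one (Theorem 2)."""
    remaining = set(adj)
    chosen = []
    while remaining:
        v = min(remaining)  # deterministic pick for the sketch only
        if include_v(remaining - {v} - adj[v], remaining - {v}):
            chosen.append(v)
            remaining -= {v} | adj[v]
        else:
            remaining -= {v}
    return chosen

# With a comparator that always includes, a path 0-1-2 yields {0, 2}.
path = {0: {1}, 1: {0, 2}, 2: {1}}
print(a_cmp(path, lambda with_v, without_v: True))  # → [0, 2]
```

Validity is structural: whenever a vertex is included, all its neighbors leave the candidate pool, so the output is independent regardless of how good the comparator is.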
Experiments. The proposal is compared with greedy methods. Greedy methods usually work very well, except on some graphs. The method seems to generalize somewhat.
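For reference, the classic minimum-degree greedy heuristic that such comparisons typically use can be sketched as follows (an assumption about the exact baseline, for illustration):

```python
def greedy_mis(adj):
    """Repeatedly pick a minimum-degree vertex, add it to the
    independent set, and delete it together with its neighbors."""
    remaining = {v: set(adj[v]) for v in adj}
    mis = []
    while remaining:
        v = min(remaining, key=lambda u: len(remaining[u]))
        mis.append(v)
        removed = {v} | remaining[v]
        remaining = {u: nbrs - removed
                     for u, nbrs in remaining.items() if u not in removed}
    return mis

# Star graph: center 0 joined to 1..4; greedy correctly picks the leaves.
star = {0: {1, 2, 3, 4}, 1: {0}, 2: {0}, 3: {0}, 4: {0}}
print(sorted(greedy_mis(star)))  # → [1, 2, 3, 4]
```

Adversarial instances (such as the SPECIAL graphs discussed in the rebuttals below) are constructed precisely so that low-degree choices like this one lead the heuristic astray.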
I agree with the conclusions: “We firmly believe that a thorough investigation into the interplay between Dynamic Programming and self-training techniques can pave the way for
new deep-learning-oriented approaches for demanding CO problems”.
Strengths: A nice paper for illustrating the use of GNNs in CO problems through self-learning, especially in NP-hard problems.
Nice time complexity for a fixed budget.
Weaknesses: Although the framework is nice, there is no insight into whether the resulting node embeddings actually reflect the decisions taken. A possible approach is to hash the compatible nodes to one code and the incompatible ones to another; this would make the method more explanatory and would strengthen the consistency of the responses.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: How do the parameters of CMP_theta evolve? How far/close are they from returning consistent/inconsistent responses? In other words, use the analysis of these parameters to diagnose how hard the problem is for different graphs.
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Maybe better GNNs could improve the results. Agree with the authors the lack of convergence analysis. In this regard, it seems to me that making the CMP_theta more explainable will help.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer v2kw for their feedback. We address their concerns below:
> Q: “How do the parameters of the $CMP_{\theta}$ evolve? How far/close are they returning consistent/inconsistent responses? In other words, use the analysis of these parameters to diagnose how hard the problem is for different graphs.”
We are thankful to the reviewer for the suggestion. We analyze the GNN decisions by calculating the consistency value, i.e., the percentage of graph pairs for which the equation of Definition 3 holds (consistency). Fig. 1 in the pdf shows the consistency values as training proceeds. Overall, the **consistency curves have an increasing behavior, indicating an increase in the comparator consistency as training goes on**.
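As an illustration of how such a consistency percentage could be computed (the score interface and toy data are assumptions for the sketch, not the authors' code):

```python
def consistency(cmp_score, pairs):
    """Fraction of ordered graph pairs (g_big, g_small), where the true
    MIS of g_big is at least that of g_small, on which the comparator's
    score agrees with the true ordering."""
    agree = sum(1 for g_big, g_small in pairs
                if cmp_score(g_big, g_small) >= cmp_score(g_small, g_big))
    return agree / len(pairs)

# Toy comparator: a "graph" is represented by its known MIS size, and
# the score is just the size difference; the last pair is mis-ordered.
pairs = [(5, 3), (4, 4), (7, 2), (3, 6)]
score = lambda a, b: a - b
print(consistency(score, pairs))  # → 0.75
```

Tracking this fraction over training epochs would produce curves like the ones reported in Fig. 1 of the rebuttal pdf.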
We are happy to address any other concerns the reviewer v2kw might have.
---
Rebuttal Comment 1.1:
Title: Are there additional comments from Reviewer v2kw?
Comment: Dear reviewer v2kw,
we hope that our response covers the key questions of consistency from the original review. We are **open to your suggestions**, since the summary indicates a clear understanding of our work. If you have any additional comments or questions, we are happy to address them.
Best regards,
Authors | Summary: This paper studies the combinatorial optimization problem of maximum independent set (MIS) via self-training. By training a consistent graph-comparator function to determine which of two graphs has the larger MIS, the MIS of the original graph can be obtained recursively and efficiently.
Strengths: 1. The notations and presentation are clear.
2. The idea of using the intermediate results during algorithm execution to supervise the model itself is novel.
Weaknesses: 1. The most important component in the proposed method is the consistent graph comparator. It would be better to include more detailed explanation on why the learned graph comparator will be consistent.
2. As the algorithm $A$ depends on $CMP_\theta$, and the whole training pipeline relies on $A$, what will happen if $CMP_\theta$ makes a lot of mistakes initially? This should be discussed.
3. The computational complexity/overhead should be discussed.
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: See in weaknesses.
Confidence: 1: Your assessment is an educated guess. The submission is not in your area or the submission was difficult to understand. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 3 good
Contribution: 3 good
Limitations: Limitations are mentioned. Negative societal impact is not applicable.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer T1ds for their feedback. We address their concerns below:
> Q: Include a more detailed explanation on why the learned graph comparator will be consistent.
Notice that the optimization problem of Eq. 2 effectively asks for more and more consistent comparators.
Nevertheless, to provide an empirical demonstration, we conducted the following experiment. We are thankful to the reviewer for the suggestion. We analyze the GNN decisions by calculating the consistency value, i.e., the percentage of graph pairs for which the equation of Definition 3 holds (consistency). Fig. 1 in the pdf shows the consistency values as training proceeds. Overall, the **consistency curves have an increasing behavior, indicating an increase in the comparator consistency as training goes on**.
We are happy to address any other concerns the reviewer T1ds might have.
---
Rebuttal Comment 1.1:
Title: Are there any last concerns from Reviewer T1ds?
Comment: We are happy to address any last concerns the reviewer T1ds might have. Beyond the consistency, let us further clarify some of the points raised by the reviewer:
2. Naturally, the untrained comparator makes a lot of mistakes in the beginning. These mistakes are also reflected in the poor performance of the untrained comparator reported in Table 1 (in the paper). However, through additional roll-outs on the sub-graphs produced during the execution of Algorithm 1, the comparator is able to recognize its past mistakes and update its weights, leading to fewer and fewer errors. We believe the latter is nicely depicted in the additional plots we have provided, which show the comparator's increasing consistency (see the Figure in the pdf). *We remark that being more consistent implies fewer mistakes.* In the revised version of our work we will incorporate the above valuable discussion.
3. We are thankful to the reviewer for raising this point. The complexity of our inference algorithm is in the worst case $O(n^3)$. In the revised version of our work we will explicitly mention the above (see also our response to [b8RC](https://openreview.net/forum?id=igE3Zbxvws&noteId=zIK0d3OVCR) and the response to [qbTV](https://openreview.net/forum?id=igE3Zbxvws&noteId=LaaSZFxI4a)).
If the reviewer has any remaining concerns, we would be happy to address them.
---
Rebuttal 2:
Title: response
Comment: Thank the authors' explanation. I raise my score from 5 to 6.
---
Rebuttal Comment 2.1:
Title: Thankful to Reviewer T1ds for raising their score
Comment: Dear Reviewer T1ds,
we are thankful for recognizing the effort for our explanations and increasing the score.
Best regards,
Authors | Summary: This work uses a dynamic programming framework for approximately solving the maximum independent set (MIS) and minimum vertex cover (MVC) problems. A graph neural network (GNN) is used to replace a heuristic step in the randomized version of the DP algorithm. Essentially, MIS (and MVC) can be broken down into (2^n) recursive decisions about whether to include a vertex in the output set or not. The randomized version doesn’t go through all of these, but decides for a random vertex whether or not to include it. This setup ensures that the output will be a valid independent set regardless of the decisions. In this paper, the decisions are made by a trained GNN (heuristic). The GNN is trained by comparing the output IS with random independent sets formed by performing rollouts starting with the alternative choice. E.g., if the GNN decides to include vertex v in the IS, then the rollouts are performed on the graph without v. The greedy rollout is included in the mixed version, to potentially improve this training signal. The GNN architecture used is referred to as GEM, but as far as I can tell, the GNN is exchangeable for any other GNN architecture.
Strengths: 1. This is a neat setup for DP problems on graphs. The framework ensures that the outputs are valid, and using a GNN as a heuristic can make sense. It's a nice idea to replace a heuristic step in an algorithm with a GNN (but this is not the first work to do so).
2. The approach is clearly laid out and easy to follow.
3. Some of the results are promising. The GNN also clearly outperforms the random CMP, indicating that the GNN is learning something useful and contributing positively to the algorithm.
Weaknesses: 1. It is not clear how much impact this paper might have. I’m sure the framework can be used for some other CO problems where the output is a subset of the input, but it is not clear how widely applicable it might be, especially given its runtime complexity.
2. The main novelty seems to be the use of a GNN for the decision step in the algorithm.
3. GEM is given a new name, but it is essentially fully-connected message passing, where neighbours and non-neighbours are aggregated separately. This is not a novel contribution, so I don’t think it deserves a new name.
4. The proposed model is not scalable to larger graphs. Already the GNN used (GEM) is n^2 per layer (n being the number of nodes), which simply doesn’t scale to large graphs. Moreover it is applied n times at inference, giving n^3 runtime. What are the largest graphs you can run on? How does the complexity compare with the complexity of the other approaches?
5. Greedy baseline outperforms the proposed approach on 2/4 datasets.
6. The results are not very convincing for MVC.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: 1. Have you tried using any other GNN architectures (GIN, GAT, Erdos, Graph Transformer, ...) for the heuristic step?
2. Are the anti-neighbors needed? How is the performance without (this would help in lowering the overall complexity)?
3. How many rollouts are needed for the results?
4. Why not use a GNN to decide on the next vertex to consider rather than doing it randomly? There might be some vertices that are easier to decide whether to include at a given time.
5. Why is SPECIAL missing from Table 2?
6. Is RB or TWITTER really out-of-distribution for COLLAB? Maybe those datasets are very similar in distribution.
7. What about using only the greedy rollout?
8. Some symmetry breaking is probably needed for the greedy approach (e.g. if there are (neighboring) nodes with the lowest degree in the graph), so multiple rollouts could be done here and the max taken. Do you consider only a single (random) rollout? Or do you run greedy multiple times?
9. The random CMP could be replaced by a degree-weighted random CMP, i.e. selecting lower degree nodes with a higher probability, but not necessarily taking a lowest degree node. This would essentially make the greedy approach a little more robust to “special” graphs, maybe providing a better baseline.
10. Do you have any insights into the GNN decisions?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations: yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer b8RC for their feedback. We address their concerns below:
> W1: Unclear impact of the paper and applicability to other CO problems
We agree with the reviewer that extending to other CO problems is not trivial, but we respectfully disagree about the significance of our work. The main motivation of our work is to investigate Dynamic Programming, a well-established technique, in combination with deep learning models. We believe our results clearly indicate how fertile this coupling is, and that it is of interest for the 'learning to optimize' audience of NeurIPS. This is also clearly identified by R. v2kw.
____
> W2: No novelty on GEM
We are not aware of any previous work using the latter aggregation mechanism. In case the reviewer can provide us with the specific reference we are happy to provide the due credits.
____
> W3: Scalability to larger graphs
The time complexity of our algorithm is $O(n^3)$, which indeed poses limitations on applying it to very large graphs. However, we remark that it requires a similar (or even better) running time compared with previous DNN approaches (see Tables 1 and 2, where s/g stands for seconds per graph). We emphasize that this runtime is obtained with code written with PyTorch modules, and we firmly hold that implementing our model with low-level libraries could further boost runtime performance. For example, at each step the adjacency matrix of the graph must be modified by removing either the selected node or its neighbors; this is an expensive operation when using NetworkX (a standard library). Furthermore, by batching graphs to allow for GPU-based parallelization, which is not currently done, we expect to significantly improve the training and inference speed of the model.
____
> W4: Performance when compared to the greedy baseline
Beating the greedy heuristic for MIS with deep learning methods that do not use any domain knowledge about the problem is a challenging task. We remark that this holds for all previous learning approaches, among which our method presents the best performance. We believe this neither lowers the value of our work nor the value of the learning-to-optimize line of research, which is a far more recent research direction compared with human-designed heuristics.
____
> W5: Performance on MVC
The focus of this work is on MIS (as declared in the title of the paper). Nevertheless, we make an effort to demonstrate the extension of our work to other CO tasks, such as MVC. We believe that with an improved architecture (e.g., as in the next question) we could achieve further improvements on MVC, but even our MIS approach readily achieves decent performance on MVC.
____
> Q1: Can you use other GNN architectures?
Yes, but we have not tried in detail, since the focus of this work is not the GNN architecture per se. We believe this is a future direction for our work, which we will explicitly mention in the final version.
____
> Q2: Role of anti-neighbors
We appreciate the recommendation from the reviewer and agree with the comment regarding computational complexity. Inspired by this, we conducted the related experiment and verified that without the anti-neighbors (AN) the complexity is indeed lower, but the performance deteriorates, as Table 1 (in the pdf) shows. We will include the experiment in the revised version.
____
> Q3: How many rollouts are needed for the results?
Table 3 (in the pdf) reports the approximation ratio for different rollout values. Notice that the optimal rollouts for both COLLAB and TWITTER belong in the range of 5 or 10.
____
> Q4: Why not use a GNN to decide on the next vertex to consider rather than doing it randomly? There might be some vertices that are easier to decide whether to include at a given time.
Indeed, it is a reasonable tweak that we have not considered since we wanted to investigate the performance of the vanilla version of the DP-based learning approach. We will explicitly mention this in the final version as a future step.
____
> Q5: Why is SPECIAL missing from Table 2?
We notice that the RB and SPECIAL distributions share common patterns. For instance, both have big clusters of fully-connected nodes while the clusters are almost independent. Therefore, including SPECIAL would add redundant information.
____
> Q6: Is RB or TWITTER really out-of-distribution for COLLAB?
The graph distributions between the three datasets are quite different. Please check Table 2 in the pdf.
____
> Q7: What about using only the greedy rollout?
It achieves similar performance.
____
> Q8: Do you run greedy multiple times or a single rollout?
We run greedy multiple times exactly as the reviewer suggests.
____
> Q9: Degree-weighted random CMP baseline
We tried the reviewer's recommendation, but unfortunately it did not produce a good result. The result of the pure random CMP over the SPECIAL graphs is 0.225 $\pm$ 0.279, while the result of the degree-weighted random CMP is 0.172 $\pm$ 0.192. Even if the chosen node is not the lowest-degree one, the comparator still has to make a choice, and this choice is wrong 50% of the time since the comparator is random, producing a low approximation ratio.
____
> Q10: Insights into the GNN decisions
We analyze the GNN decisions by calculating the consistency value, i.e., the percentage of graph pairs for which the equation of Definition 3 holds (consistency). Fig. 1 in the pdf shows the consistency values as training proceeds. Overall, the **consistency curves have an increasing behavior, indicating an increase in the comparator consistency as training goes on**.
We are happy to address any other concerns the reviewer b8RC might have.
---
Rebuttal Comment 1.1:
Title: Are there any remaining questions from reviewer b8RC?
Comment: Dear reviewer b8RC,
We appreciate your constructive feedback and questions. We are wondering whether there are any additional questions that we might be able to answer for the reviewer.
The key questions so far concern both the *methodology*, e.g., requirement for the anti-neighbors, and the *experimental setup and results*, e.g., the the difference between the datasets (please check Table 2 in the provided single-page pdf for a detailed answer), or the insights for the GNN decisions.
Please let us know if you have any remaining questions. We are happy to answer them. If you are satisfied with our responses, we would be grateful if you could re-evaluate your score.
Best regards,
Authors | Summary: This paper proposes a method to solve the maximum independent set (MIS) problem using self-trained dynamic programming and carefully-designed GNNs. The MIS problem is decomposed into dynamic programs and solved by comparing two reduced graphs.
Strengths:
It is interesting and innovative to address the MIS problem using GNNs in a self-training manner.
The proposed method is effective and efficient for solving the MIS problem.
Weaknesses:
The following improvements could enhance the experiments:
1. Baseline comparisons: In Line 74, several recent baselines are mentioned. It would be beneficial to include comparisons with these methods in the experiments to demonstrate the superiority of the proposed approach.
2. Evaluations on large-scale graphs: Existing methods [Schuetz et al., 2022b, Wang and Li, 2023] have demonstrated their effectiveness on larger-scale graphs with more than $10^4$ nodes. It would be valuable to showcase the results of the proposed method on such graphs.
3. Comparisons on model training: Since the proposed method employs self-training, it would be informative to compare the training cost of the proposed method with that of the baselines. Including such comparisons would provide insights into the efficiency of the proposed approach.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: None
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations: Please check the weaknesses.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer YVwa for their feedback. We address their concerns below:
> Q1: Baseline comparisons
In Line 74, we present various DNN approaches for various CO settings; adapting those to MIS may not be trivial. In our experimental evaluations we compare with the existing approaches for MIS. If the reviewer believes there is a relevant baseline that we should include in the comparisons, we are happy to consider it.
____
> Q2: Evaluations on large-scale graphs
We are thankful to the reviewer for the remark and the references. Let us clarify why the presented methods are not directly comparable with ours and the other deep-learning approaches that we compare against.
Firstly, we remark that [Schuetz et al., 2022; Wang and Li, 2023] present results only for $d$-regular graphs for small values of $d$ (3 and 5). Although these graphs admit a large number of nodes, the number of edges scales only linearly with $n$, which is why their methods can scale. This is a different setting from ours, where we make no such assumptions about the graph distribution. Furthermore, notice that both [Schuetz et al., 2022; Wang and Li, 2023] use GNNs, meaning that the inference algorithm has at least $O(n^2)$ complexity for dense graphs.
What is more, let us clarify why the learning in these approaches also differs from the aforementioned deep-learning approaches. [Schuetz et al., 2022] optimize the parameters of a GNN providing a fractional MIS solution and then use standard randomized rounding to convert it to an integral one. However, they *train* the GNN for each new graph separately during inference (in other words, the parameter optimization of the GNN is part of the inference algorithm). The latter pipeline is very similar to the classical *relax and round* approach that has been used in approximation algorithms for years (see Sections 4, 5 and 6 of [1]). On the other hand, [Wang and Li, 2023] do not derive a new learning algorithm but rather use their approach to tune the classical *greedy algorithm*, whose favorable practical performance has been identified over the years.
[1] The Design of Approximation Algorithms, Williamson and Shmoys, 2011
____
> Q3: Model training timing
We are thankful for the recommendation by the reviewer. We ran the training process on the TWITTER dataset for 1000 iterations. Concretely, given the constructed dataset, we measured the time required for the forward + backward passes with our model and with the EGN model from [1]. The results in the table below showcase that our method is faster on this dataset.
| Model | Twitter (seconds/graph) |
|-------|-------------------------|
| Ours | 0.45 |
| EGN | 1.62 |
Although the TWITTER dataset does not contain graphs with thousands of nodes, our method should scale at a rate similar to that of prior deep learning approaches for dense graphs.
We are happy to address any other concerns the reviewer YVwa might have.
---
Rebuttal Comment 1.1:
Comment: Thank you for your response, and my questions have been adequately addressed. At this point, I intend to maintain my current score.
---
Reply to Comment 1.1.1:
Title: We appreciate your feedback and questions
Comment: Dear Reviewer YVwa,
we are thankful for your feedback and strong support for our work. We welcome any additional recommendations to improve our work.
Best regards,
Authors | Rebuttal 1:
Rebuttal: Dear reviewers,
We are thankful for your time and effort to handle the paper. Please find enclosed the single-page pdf. We respond to each question of the reviewers below.
Pdf: /pdf/a220a545a16b0db3ad66aefb9b9d166da521c6e4.pdf | NeurIPS_2023_submissions_huggingface | 2023 | Summary: The paper proposes a GNN-based method to approximate the largest independent set of a graph. The algorithm trains a GNN layer to decide at each iteration whether a randomly selected node should be part of the independent set; the resulting graph (either losing the neighbours of the chosen node or only the node itself) is then the input at the next iteration. Four graph-distribution datasets are used for empirical evaluation.
Strengths: The paper employs a DNN module at an appropriate place, i.e. as the graph comparator function, which is sensible. This would be a reasonable contribution if the weaknesses would be addressed.
Weaknesses: EDIT: Summary: Some of the weaknesses mentioned were incorrect (W1, W2, W4) (apologies to the authors). Some of the weaknesses simply need to be discussed in the paper (W3, W5, W6, W7, W9). In my opinion the deciding factor should be W8: does the AC think that benchmarks with no real-world connection to the problem we are trying to solve (used only because prior work did, and nobody checked that) constitute good enough empirical evaluation? If yes, the paper should clearly be accepted (score of 7); otherwise it should be rejected, as the accepted benchmark should be updated. Thus, I have updated my score to a 5, or a 7 if the authors add a benchmark with a real-world connection, or explain why one of the current benchmarks has a real-world connection and thus represents the distribution of graphs one might find in practice.
1. I retract the first point in the bullet list below, it is indeed incorrect as the authors point out in the rebuttal.
2. In the second bullet point, I did indeed forget a word ("might") that matters; however, regarding Remark 4 I maintain my criticism: while the interpretation that the authors provide in the rebuttal makes sense, what is written in the paper is much broader. I suggest they reword for the camera-ready.
3. Regarding point 3, I thank the authors for addressing the computational complexity more directly, but the run-time comparison to Gurobi seems unfair, as Gurobi (as far as I understood) tries to find the optimal solution, which is naturally exponential in the worst case; this seems to be an apples-to-oranges comparison, and the greedy heuristic may be a better comparison point.
4. I also retract W4.
5. It is a weakness that the paper doesn't discuss this in its current form.
6. I stand by W6 given the current phrasing in the paper, which is "Since we observe unexpected performance from RUN-CSP on the COLLAB and RB269 datasets, we have omitted those results from the table." This doesn't explain why it has been removed, which is necessary.
7. Units should still be explained. One prior work doesn't make it standard enough to not mention it.
8. Prior work having used the benchmarks is in my opinion a weak argument without any other benchmarks. But I understand that this is a contentious issue within the ML community. I leave this up to the AC, but as far as I can tell from the response of the authors, there is no particular real-world use for computing the MIS on something like the Twitter dataset.
9. I agree that such work is still valuable, but it's an obvious limitation that needs to be discussed and wasn't in the paper.
The paper has substantial weaknesses that prevent acceptance at this time in my view:
1. The authors' demonstrated knowledge of what NP-hard means is poor; the term is frequently misused, and the context of approximation algorithms is missing completely. In more detail: NP is a class of BINARY decision problems, i.e. problems that ask a yes-or-no question. Maximum Independent Set (Definition 1) as defined in the paper is not such a problem and thus cannot be classified as NP-hard or anywhere in that complexity hierarchy, leading to several incorrect claims in the paper (e.g. line 181). Furthermore, problems that are NP-hard are considered computationally infeasible for anything but the smallest instance sizes, so a sentence like "computationally infeasible for NP-hard problems" (line 21) does not make sense. If we want to discuss the optimisation problems associated with an NP-hard problem, e.g. finding the optimal route of the travelling salesman problem or finding the maximum independent set, this is a different computational complexity class with its own hierarchy. This is especially important once we care about approximations, doubly so when we want to consider randomized approximation algorithms; however, any such discussion is missing.
2. Various statements about the quality of the solutions of the algorithm are imprecise or plain wrong. For instance, in line 136 the claim is made "there exist a reasonable graph comparing functions that i) are efficiently computable ii) lead to near optimal solutions" with no reference to any proof or evidence. These baseless claims decrease the authors' credibility significantly. (Another example is the second sentence of Remark 4.)
3. The computational complexity of the algorithm is never discussed, which is particularly relevant given we are approximating solutions and thus there is a fundamental trade-off between time and the quality of the solution found. The algorithm proposed has high computational complexity: a single GEM module layer as proposed in Equation 1 is already of complexity O(n^2), where n is the number of nodes. This would quickly become infeasible for problems of an interesting size and also raises the question of why a GraphTransformer architecture wasn't considered. Furthermore, the GEM layers need to be applied O(n) times in the worst case, as the recursion depth of Algorithm 1 is O(n) in the worst case. This gives us at least O(n^3) at inference time, neglecting the training time, making the algorithm expensive to run. This is not compared against any other method either; e.g. the computational complexity of traditional solvers like Gurobi or of other baseline methods is also not given.
4. Theorem 1 seems to be missing a +1: presumably, when we remove a node v from the graph and add it to the independent set, this should lead to a +1 appearing somewhere in the equation for |MIS(G)|.
5. The computational complexity of computing the expectation in the loss function given in Equation 2 seems prohibitively high, and there is no discussion of the variance when estimating the expectation from only samples. In practice, this seems to be replaced with roll-outs, another form of probabilistic estimation, whose complexity and variance remain unmentioned. Again, a claim is made that these roll-outs are better than the original expectation without any empirical or theoretical evidence.
6. The empirical evaluation removes the results from the RUN-CSP baseline with the words "unexpected performance" without any further justification, this is highly unusual and not acceptable practice.
7. Table 1 uses an unexplained unit of s/g.
8. The benchmark datasets aren't justified: why is it interesting to compute the MIS of a Twitter graph? Given that the graph distribution will be highly relevant to performance, this is important. While it is understandable to re-use benchmark datasets that prior work has used, it doesn't help the paper if the benchmark datasets aren't specific to any real-world problem people care about. At least one dataset needs to be of relevance to a real-world problem; e.g. MIS is used in compiler optimisations.
9. There is no adequate discussion of running time or of any benefits of the proposed method, or of DNN methods in general, over classical optimisation solvers for the problem studied. Indeed, from Table 1 I cannot discern why a practitioner would use the proposed method over Gurobi or SCIP.
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: None.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 3 good
Contribution: 3 good
Limitations: No, the limitations are not addressed; see the weaknesses.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer qbTV for their feedback. We address their concerns below:
> Q1: MIS cannot be classified as NP-hard or anywhere in that complexity hierarchy
We respectfully disagree with the reviewer. Maximum Independent Set (MIS) is an NP-hard problem (e.g. see [D], published in ICML'20).
Indeed, MIS does not belong to NP since it is not a decision problem. However, contrary to NP-complete problems, NP-hard problems do not necessarily lie in NP. Below we quote [A], the standard textbook on approximation algorithms.
P-459: “We conclude this section by defining the term NP-hard, which can be applied to either decision or optimization problems. Roughly speaking, it means as hard as the hardest problem in NP ”
P-459: Definition B.9 (NP-hard): A problem A is NP-hard if there exists a polynomial-time algorithm for an NP-complete problem B when the algorithm has oracle access to A (notice that A is not required to belong to NP).
Maximum Independent Set is not only NP-hard but in fact highly inapproximable. More precisely, for any $\epsilon>0$ there is no $1/n^{1-\epsilon}$-approximation polynomial-time algorithm for Maximum Independent Set unless P=NP [A-C]. As a result, solving MIS (even approximating it) is a computationally intractable task. Given the above, we do not understand why the following lines are considered plainly incorrect.
Line 181 -> “Since MIS is an NP-Hard problem, annotating such data comes with an insurmountable computational burden”.
Line 21 -> “Annotating such data requires the solution of a huge number of instances of the CO, hence such supervised learning approaches are computationally infeasible for NP-hard problems”.
[A] The Design of Approximation Algorithms, Williamson and Shmoys, 2011
[B] Linear degree extractors and the inapproximability of max clique and chromatic number, Zuckerman, 2007
[C] https://webdocs.cs.ualberta.ca/~zacharyf/courses/approx_2014/notes/nov26-675.pdf
[D] Learning What to Defer for Maximum Independent Sets, Ahn et al. 2020
____
> Q2: Various statements are imprecise or plain wrong
There could be a misunderstanding here, so please let us paste the actual line 136:
“However, there might exist a reasonable graph comparing functions that i) are efficiently computable ii) lead to near-optimal solutions”.
Thus, we do not claim/prove that such a graph comparing function exists. The goal of this paper is to derive such a GNN-based graph comparing function that performs well (producing large independent sets) in graphs of interest.
In Remark 4 we write that the non-convex problem of Eq. 2 can be treated with a first-order method such as SGD, which in practice performs well in the non-convex optimization settings arising in the context of ML. The latter is verified by our experimental evaluations, which indicate that the self-supervised approach improves over training. Beyond the aforementioned clarifications, we are open to suggestions from the reviewer to improve the clarity of our writing.
____
> Q3: Computational complexity
In the revised version we will explicitly state the worst-case computational complexity of our inference algorithm, which is indeed O(n^3). In Table 1 we compare the running time of our method against previous deep learning approaches as well as Gurobi and SCIP. Note that s/g stands for seconds/graph and was used in previous works as well, e.g. [1, 2]. We also remark that the worst-case complexity of Gurobi and SCIP is exponential in n, since otherwise $P = NP$.
____
> Q4: Missing a +1 in Theorem 1
Let us first emphasize that Theorem 1 is correct (a +1 is **not** missing).
Notice that |MIS(G/{v})| refers to removing the node $v$ from G, and thus $v$ will not be contained in the produced solution (a subgraph of isolated nodes). The case where $v$ is part of the produced solution is captured by |MIS(G/N(v))|, in which all neighbors of $v$ are removed; thus $v$ becomes an isolated node that will never be removed in the recursive calls.
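For illustration, the recursion described above can be sketched as a plain exhaustive algorithm (a stand-in, not the paper's GNN-guided inference). It assumes the branching |MIS(G)| = max(|MIS(G/{v})|, |MIS(G/N(v))|), where removing only the open neighborhood N(v) leaves v isolated, so v is counted in the second branch without an explicit +1:

```python
def remove(adj, nodes):
    """Subgraph obtained by dropping `nodes` (adjacency as dict of neighbor sets)."""
    return {u: {w for w in nbrs if w not in nodes}
            for u, nbrs in adj.items() if u not in nodes}

def mis_size(adj):
    """Exact MIS size via the branching recursion (exponential time; tiny graphs only)."""
    # base case: isolated nodes always belong to a maximum independent set
    if all(not nbrs for nbrs in adj.values()):
        return len(adj)
    v = next(u for u, nbrs in adj.items() if nbrs)   # any non-isolated node
    return max(mis_size(remove(adj, {v})),           # v excluded: solve G/{v}
               mis_size(remove(adj, adj[v])))        # v included: G/N(v); v survives isolated
```

On the path 0-1-2-3 this returns 2, matching the point that the +1 is implicit in the second branch.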
____
> Q5: Variance in estimating the expectation
The variance of the process is bounded, since the random variable takes values of at most $n$. In our experimental evaluations we used 5 roll-outs, which is enough to sufficiently reduce the variance of the process.
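As a toy illustration of the roll-out estimator (not the paper's comparator-driven inference; a plain random-order greedy stands in for it here), averaging k independent roll-outs estimates the expected solution size while shrinking the estimator's variance by a factor of k:

```python
import random

def random_greedy_is(adj, rng):
    """One roll-out: random-order greedy independent set (a stand-in for the
    paper's randomized, comparator-driven inference algorithm)."""
    nodes = list(adj)
    rng.shuffle(nodes)
    chosen, blocked = set(), set()
    for v in nodes:
        if v not in blocked:
            chosen.add(v)
            blocked |= adj[v] | {v}   # block v and its neighbors
    return len(chosen)

def rollout_estimate(adj, k=5, seed=0):
    """Average of k roll-outs, mirroring the 5-roll-out estimator described above."""
    rng = random.Random(seed)
    return sum(random_greedy_is(adj, rng) for _ in range(k)) / k
```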
____
> Q6: Not acceptable practice to remove the RUN-CSP baseline
We respectfully disagree with the reviewer. We use the open-source code (of the authors) for the RUN-CSP model. When this code is trained on the RB and COLLAB datasets, it obtains poor performance. Therefore, instead of misleading readers with this low performance, we prefer to indicate that we obtained unexpected performance, so that other researchers can validate or refute our claim. If the reviewer has any recommendation on the topic, we will gladly change the phrasing to better reflect our case.
____
> Q7: Table 1 uses an unexplained unit of s/g
Seconds/graph; we will clarify this. This is standard notation in deep learning papers, such as [1].
____
> Q8: Justification for benchmark datasets
In our experimental evaluations we used the benchmarks used in the previous DNN approaches, e.g. [1], so as to provide a fair comparison.
____
> Q9: Comparison with classical optimization solvers
We remark that our method achieves the best performance across previous DNN approaches for MIS that have already been published in top-tier conferences such as NeurIPS. We agree with the reviewer that learning to optimize is still in its infancy and in most CO settings of interest is outperformed by standard solvers and algorithms that have been developed for decades. We believe that the latter lowers neither the value of our work nor the value of the learning-to-optimize line of research.
We are happy to address any other concerns the reviewer qbTV might have.
[1] Karalias, Loukas. Erdos goes neural: an unsupervised learning framework for combinatorial optimization on graphs. NeurIPS, 2020
---
Rebuttal Comment 1.1:
Title: Are there any remaining concerns?
Comment: Dear reviewer qbTV,
we are thankful for your time and effort to review our work.
Based on your original review, our responses clarify that MIS is an NP-hard problem (correctly stated in the manuscript) and the validity of our theorem. We also elaborate on the statements flagged by the reviewer. We are wondering whether the reviewer has any other concerns regarding our work.
Best regards,
Authors
---
Rebuttal 2:
Title: Thank you for the updates and the increased score
Comment: Dear Reviewer qbTV,
Thank you for the revised remarks and for increasing your score.
We confirm that we will do our best to improve the paper in the final version and include the recommendations of all reviewers. Concretely, based on your response:
*Q8*. If the reviewer has any concrete datasets, we would be happy to take a look and try to conduct a preliminary experiment in the final version. That being said, the datasets that we used are the ones that previous DNN approaches use for MIS, with those papers being accepted in top-tier machine learning conferences, often as orals [A].
*Q2*. We will rephrase Remark 4.1 to avoid any misunderstanding.
*Q5*. We will explicitly mention the number of roll-outs used to estimate the expected value and the variance.
*Q6*. We will further clarify that the RUN-CSP presented unexpectedly poor performance.
*Q7*. We will explicitly mention seconds/graph. We are thankful for bringing this to our attention.
*Q9*. We will further discuss the limitations of our approach, and we sincerely hope that the research community further builds on our result and develops the interplay between dynamic programming and self-training techniques.
Once again, we are thankful for the revised remark and the revised score.
[A] Karalias, Loukas. Erdos goes neural: an unsupervised learning framework for combinatorial optimization on graphs. NeurIPS, 2020. | Summary: This paper proposes a novel graph neural network (GNN) framework for solving the maximum independent set (MIS) problem. The idea is to use a randomized divide-and-conquer algorithm (which is termed as dynamic programming in the paper though) to pick a random node or its neighborhood and divide the problem into two subproblems recursively. In order to choose one of two actions at each step, the authors propose to parameterize the comparator with a GNN over the current graph structure. The advantage of this method is that it can bootstrap training data generation with a trained GNN, and then improve the GNN with the new training data. This reduces the cost of training data generation for MIS, which is an NP-hard problem. Experiments on 4 datasets show that the proposed CMP method achieves the best results among all deep-learning-based methods under similar time budgets.
Strengths: - S1: This paper proposes a novel solution for the MIS problem. It induces a random MIS inference algorithm from a comparator function, where the comparator function is parameterized by a GNN model.
- S2: One of the challenges for NP-hard problems is the cost of generating annotated data. The authors employ a smart way to bootstrap data generation from a learned GNN comparator and use the generated data to further train the model.
- S3: Experiments on 4 datasets show that the proposed CMP method achieves the best result among all deep-learning-based methods under similar time budgets.
Weaknesses: - W1: The paper mistakes recursive algorithms for dynamic programming. This makes the paper somewhat hard to understand. Dynamic programming is for problems with overlapping sub-problems. If the sub-problems are non-overlapping, then it is called a divide-and-conquer algorithm. If every action only results in a single sub-problem, then it is merely a normal recursive algorithm. According to Alg. 1, the proposed model is a recursive algorithm.
- W2: The writing of this paper can be improved. The authors don’t emphasize the randomized nature of their inference algorithm very much, so it’s hard to understand what an expectation over the inference algorithm is at first glance. As the randomized inference algorithm is the key technique to achieve the goal of self-training, the authors may emphasize it more in the abstract and the introduction.
- W3: There are some concerns regarding the time complexity of the proposed CMP model. According to Alg. 1 and Eqn. 1, the time complexity of this paper is $O(n^2|MIS(G)|)$ in the worst case. The datasets used in this paper mostly contain around a hundred nodes, which can’t reveal the disadvantage in the time complexity. It’s recommended that the authors analyze this in the paper. Also is it possible to avoid $O(n^2)$ dense propagation in the GNN?
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: - Q1: Line 20-21: According to [1], any polynomial-time sampler generates an easier sub-problem of an NP-hard problem. It looks to me that the CMP model can’t avoid this problem, either. Can you explain that?
- Q2: Line 22-23: How do you circumvent this difficulty? Please be explicit in the introduction.
- Q3: Algorithm 1: Please emphasize this is a randomized algorithm.
- Q4: Line 183-184: This sentence requires more clarification. From my understanding, a model can’t learn anything from what it infers, unless it is a randomized algorithm or some planning algorithms as in this paper.
- Q5: Algorithm 2 Line 5: Where is $G_{init}$ used?
- Q6: Line 240-241: It’s a little bit confusing that you replace the basic pipeline, which is something you proposed, with two other approaches. You may say “improve the basic pipeline by …”. The title of the paragraph may be changed to “Better MIS estimation”.
- Q7: Line 249: Mixed roll-out variant should be parallel with the first approach, not with “Estimating the MIS”. It may be renamed to “Mixed roll-out estimator”.
- Q8: Line 316-317: What do you mean by “core modules”?
[1] Yehuda et al. It’s not what machines can learn, it’s what we cannot teach. ICML 2020.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: The authors discuss the limitations but not the societal impact in the paper. It seems that the proposed method will not cause obvious negative societal impact, but the authors are encouraged to discuss it.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer rVy4 for their thoughtful feedback. We address their concerns below:
> W1: Is dynamic programming appropriate for this paper?
MIS is a problem with overlapping subproblems: given the maximum independent sets of the sub-graphs $G/\{u\}$ and $G/\{N(u)\}$, one can directly find the maximum independent set of $G$. Notice that computing the MIS on $G/\{u\}$ and $G/\{N(u)\}$ yields overlapping subproblems, since $G/\{u\}$ and $G/\{N(u)\}$ share both edges and nodes. That is the reason we used the term dynamic programming. That being said, we are open to suggestions for improving the clarity of our work.
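A hedged sketch of this overlapping-subproblem structure (again independent of the paper's GNN pipeline): memoizing the same recursion on the set of surviving nodes turns it into a dynamic program, and cache hits occur exactly when the branches $G/\{u\}$ and $G/\{N(u)\}$ eventually lead to a shared subgraph.

```python
from functools import lru_cache

def mis_dp(adj):
    """Exact MIS size with memoization on the surviving node set.

    adj: dict mapping node -> set of neighbors. Returns (mis_size, cache_hits);
    cache_hits > 0 indicates that overlapping subproblems actually occurred.
    """
    @lru_cache(maxsize=None)
    def solve(alive):                         # alive: frozenset of remaining nodes
        active = [v for v in alive if adj[v] & alive]
        if not active:                        # only isolated nodes remain
            return len(alive)
        v = min(active)                       # deterministic pivot so subproblems repeat
        return max(solve(alive - {v}),        # exclude v: solve G/{v}
                   solve(alive - adj[v]))     # include v: G/N(v); v stays isolated, no +1
    best = solve(frozenset(adj))
    return best, solve.cache_info().hits
```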
____
> W2: Randomized nature of the inference algorithm
Thank you for the remark. The randomized nature of our inference comes from the fact that, at each level of the recursion, the node under examination is sampled uniformly at random. In our revision we will further emphasize the randomized nature of our approach as well as its importance.
____
> W3: “There are some concerns regarding the time complexity of the proposed CMP model. According to Alg. 1 and Eqn. 1, the time complexity of this paper is in the worst case. The datasets used in this paper mostly contain around a hundred nodes, which can’t reveal the disadvantage in the time complexity. It’s recommended that the authors analyze this in the paper.”
The reviewer is right: the time complexity of our algorithm is $O(n^3)$ in the worst case. In the revised version of our work we discuss the time complexity of our method in detail. In the experimental evaluations we used the exact same datasets as the previous DNN approaches for MIS, on which our method achieves similar or even better performance with respect to running time (see s/g in Table 1). The main scope of this work is to blend ideas from theoretical computer science with neural network approaches. Concretely, we aim to investigate how to couple classical algorithmic design techniques with deep learning models. Scaling our method, as well as the previous DNN approaches, to graphs with millions of nodes is an interesting and important future research direction. We will discuss this in the final version.
____
> W4: Is it possible to avoid dense propagation in the GNN?
We appreciate the recommendation from the reviewer. Inspired by this, we conducted the related experiment and verified that the complexity without the anti-neighbors is lower, but the performance deteriorates. The results in Table 1 (in the 1-page pdf) indicate how the performance without the anti-neighbors (AN) deteriorates. We will include the experiment in the revised version.
____
### Questions
Q1: We first remark that using data sampled from a distribution of "easier instances" (with respect to the worst-case instances of the CO setting of interest) is not necessarily futile. In fact, the key motivation of the learning-to-optimize literature is that in many cases the instances of interest admit an easier combinatorial structure. Secondly, our approach does not lie in the setting considered in [1], since our polynomial-time sampler is not static but evolves over time as the weights of the GNN model evolve (notice that our dataset is augmented according to the choices of the model at each epoch).
Q2: We circumvent the difficulty of annotating such data with our DP/bootstrapping approach of self-annotation through our GNN-induced algorithm. In the revised version of our work we will further elaborate on this part.
Q3-Q4: We will fix those, thank you for the suggestion.
Q5: $G_{init}$ is just a randomly sampled graph from the distribution $\mathcal{D}$.
Q6-Q7: Thanks for the suggestion, we will update to “Better MIS estimation” and “Mixed roll-out estimator”.
Q8: We appreciate the attentive reading of the reviewer; we meant that we used only basic modules and not more advanced architectures, e.g. GAT, etc. We will rephrase it to 'basic components' to avoid any confusion.
We are happy to address any other concerns the reviewer rVy4 might have.
---
Rebuttal Comment 1.1:
Title: Are the concerns of the reviewer rVy4 addressed?
Comment: Dear reviewer rVy4,
we are thankful for the constructive questioning and the overall appreciation of our work.
Our responses clarify why dynamic programming is appropriate, the randomized nature of the inference, as well as the questions raised by the reviewer. We are wondering whether the reviewer has any remaining questions. We are happy to elaborate further, since we consider that this work introduces novel elements, e.g., with respect to data annotation for MIS.
Best regards,
Authors
---
Rebuttal Comment 1.2:
Comment: Thanks the authors for their detailed reponse. The authors successfully addressed my concerns. As my score is already positive, I tend to keep my score.
Regarding W1, I guess what you want to express is: although $G/u$ and $G/N(u)$ are not the same problem, their subproblems may be shared, and that constitutes dynamic programming. Is that right?
---
Reply to Comment 1.2.1:
Title: Thank you for the support and the strong feedback
Comment: Dear Reviewer rVy4,
Thank you for your response. We are glad that our response addressed your concerns.
*Regarding W1,... Is that right?:* Yes, this is exactly what we mean. In the final version of the paper we will include the discussion on the latter.
If the reviewer has any other suggestions, they are more than welcome.
Thank you again for your support and constructive feedback.
Best regards,
Authors | null | null | null | null |
EvoPrompting: Language Models for Code-Level Neural Architecture Search | Accept (poster) | Summary: This paper proposes an interesting idea for neural architecture search. Specifically, the idea is based on evolutionary computation and an LLM, where the LLM plays the role of genetic search, i.e., the crossover and mutation operations. The experiments are conducted on two benchmarks, MNIST-1D and CLRS, which are for convolutional neural networks and graph neural networks, respectively.
===================================
Thanks very much to the authors for their effort in explaining. As can be seen, the necessary ablation study is still not provided as justification. However, considering that the idea in this work is interesting, I will adjust my score to 'Borderline reject'.
Strengths: The LLM is explored for the neural architecture search.
Weaknesses: The experiments can only serve the purpose of demonstration, while the usefulness of the idea is not clear. The main problem is that the experiments are not common ones within the neural architecture search community, and the compared algorithms are not the state of the art in the community. The LLM plays the role of crossover and mutation, but ablation studies are missing to verify that the LLM is better than the original crossover and mutation operators. In addition, the fitness function is quite odd because it is obtained through testing by the authors, while the motivation is not clear.
Technical Quality: 1 poor
Clarity: 2 fair
Questions for Authors: This paper mentions multiple times that existing evolutionary approaches for neural architecture search have required careful design and specification of a discrete search space, while the proposed method covers any neural network architecture. I am not sure of the particular meaning of "any" in this claim. However, I would like to note that the references cited for the evolutionary approaches used fixed-length encodings, so they may have that limitation. In addition, there is also work [1] using evolutionary approaches for neural architecture search with variable-length encoding, whose search space also includes any neural network architecture.
[1] Sun et al., “Evolving deep convolutional neural networks for image classification,” IEEE Transactions on Evolutionary Computation, vol. 24, no. 2, pp. 394-407, 2020.
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 1 poor
Presentation: 2 fair
Contribution: 1 poor
Limitations: See Weaknesses.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for taking the time and energy to read our paper and provide feedback. Below, we provide responses to the main concerns and questions raised in your review.
"I am not sure of the particular meaning of: "any" in this claim."
- Sorry for the confusion – we use "any" to distinguish EvoPrompting from existing NAS methods, which are constrained by fixed operator vocabularies and patterns for combining those operators. For example, a common NAS practice is to have an "activation" operator (e.g. swish, GELU, ReLU) that can take N possible values. In contrast, EvoPrompting does not have this restriction and can output python code that encodes other activation functions as well, even ones that may not have been used before in other architectures.
- In reference to the issue of fixed- vs. variable-length encoding, our approach is indeed limited by the context length of the LM, and we are happy to add this caveat to the paper. (Thank you!) However, this limitation may be mitigated in a couple of ways: 1) the context lengths of modern LMs are constantly increasing as progress in the field continues rapidly, and 2) architectures are modular - EvoPrompting may be applied sequentially or in parallel to optimize individual sub-components of architectures, just as we did in Section 4.2 for the triplet processors of GNNs.
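To make the contrast with fixed operator vocabularies concrete, here is a hypothetical sketch (not taken from the paper; the function and mixing weight are invented for illustration) of the kind of novel component a code-level search could emit:

```python
import math

# Hypothetical illustration: a code-generating search is not limited to a
# fixed "activation" vocabulary (swish, GELU, ReLU) and could emit a novel
# operator such as this ReLU/tanh-gated blend, never enumerated in advance.
def blended_activation(x: float, alpha: float = 0.7) -> float:
    relu = max(0.0, x)
    gated = x * math.tanh(x)
    return alpha * relu + (1.0 - alpha) * gated
```

A fixed-vocabulary NAS method could only pick one of its N predefined activations; a code-level method can produce components like this directly as Python code.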
"The experiments can only serve the purpose of demonstration, while the usefulness of the idea is not clear"
- EvoPrompting provides multiple benefits not just for NAS, but for applications of LLMs. Whereas other NAS methods rely on a hand-designed search space consisting of a finite set of building blocks, EvoPrompting can generate potentially any architecture, since it directly generates code instead of discrete building blocks. This is more flexible, more expressive, and easier to use than approaches that require a hand-designed search space. The generated architecture may include components that the human designer did not think to include in the search space.
- We additionally show in our ablation studies that EvoPrompting is significantly more effective than applying a language model alone – it far more effectively completes this difficult task that LLMs had not been evaluated on before and even proposes novel architecture designs that beat the current SoTA without adding more model capacity. Our search requires only 1600 samples for both MNIST-1D and CLRS, while both **finding SoTA architectures and improving the Pareto frontier of model size/test accuracy**. Previous multi-trial NAS efforts have required anywhere from O(10K) ([So et al.](https://arxiv.org/abs/2109.08668)) to O(1T) ([Real et al.](http://proceedings.mlr.press/v119/real20a/real20a.pdf)) examples.
- While developing novel GNNs may not be quite as competitive as developing more common architectures like Transformers, they are still very important to parts of the ML community. To scale up to more competitive tasks would require more compute than we had the budget for – for example, pre-training competitive language models is very expensive.
"ablation studies are missing for the verification of LLM better than the original crossover operator and mutation operator"
- There is unfortunately no "original" crossover and mutation operator - these operators vary vastly in design across NAS works because they are search-space specific. One of the benefits of EvoPrompting is that it does not require a hand designed search space and so comparing to such a search space and its corresponding mutator operators would introduce too many confounders to be useful.
- We compared implicitly to the "conventional" mutators and crossovers in methods published on the NATS-Bench benchmark.
- We also conducted ablation studies of simpler crossover/mutation operators that are natural with an LM setup, such as using naive few-shot prompting or selecting random parents to crossover from.
"the fitness function is quite odd because it is obtained through the test from the authors, while the motivation is not clear"
- Our fitness function is simply a way to weight the val. error by model size, which penalizes the more trivial designs that improve upon validation accuracy simply by increasing the number of parameters. It is common in evolutionary NAS to either use the val. accuracy as the fitness function ([Liu et al.](https://ieeexplore.ieee.org/abstract/document/9508774)) or a weighted combination of the val. accuracy and other factors ([Branke et al.](https://link.springer.com/book/10.1007/978-3-540-88908-3), [Groh and Kist](https://ieeexplore.ieee.org/abstract/document/10189194)).
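A minimal sketch of such a size-weighted fitness (illustrative only — the exact weighting used in the paper may differ, and `ref_params` is a hypothetical reference size):

```python
# Illustrative size-weighted fitness (lower is better): scales validation
# error by relative model size, so a design cannot win simply by adding
# parameters. The linear penalty and reference size are assumptions.
def fitness(val_error: float, num_params: int, ref_params: int) -> float:
    size_penalty = num_params / ref_params
    return val_error * size_penalty

# Two candidates with equal validation error: the smaller one scores better.
small = fitness(val_error=0.10, num_params=1_000_000, ref_params=2_000_000)
large = fitness(val_error=0.10, num_params=4_000_000, ref_params=2_000_000)
```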
"The main problem is the experiments are not common ones within the community of neural architecture search, and the compared algorithms are not the state of the arts"
- We did conduct a comparison against several common multi-trial and weight sharing NAS methods (including the commonly used DARTS) on the NATS-Bench benchmark, which is **common and standard within the NAS community and covers the widely known CIFAR-10, CIFAR-100, and ImageNet benchmarks**. This comparison does handicap EvoPrompting in several key ways (discussed in Appendix A.9 and referenced in the conclusion), but EvoPrompting still performs comparably to the other NATS-Bench methods, despite not being able to use its full functionality.
- However, multiple past works ([Yu et al.](https://arxiv.org/abs/1902.08142), [Bender et al.](https://arxiv.org/abs/2008.06120), [Li and Talwalker](https://arxiv.org/abs/1902.07638)) have noted the unfairness and confounding nature of NAS comparisons with unequal search spaces. Since EvoPrompting is not designed for use with a finite and discrete search space (i.e. a set of pre-defined building blocks) and instead offers a flexible search method capable of generating more novel and varied architectures via directly generating code, there is no clear and fair way to directly compare EvoPrompting against other NAS methods.
---
Rebuttal Comment 1.1:
Title: Re: ablations
Comment: Thanks for reviewing our rebuttal! We appreciate it. Concerning the ablations, we wanted to re-emphasize that:
(1) Our paper includes **ablation studies of simpler crossover/mutation operators** that are natural with an LM setup, such as using naive few-shot prompting or selecting random parents to crossover from.
(2) There is unfortunately no "original" crossover and mutation operator - these operators vary vastly in design across NAS works because they are search-space specific. One of the benefits of EvoPrompting is that it does not require a hand designed search space and so comparing to such a search space and its corresponding mutator operators would introduce too many confounders to be useful.
(3) We have also implicitly compared against the crossover/mutation operators used in the methods published on the NATS-Bench benchmark.
**An important question**: Maybe it'd be helpful if you could specify what you mean by the "original crossover operator and mutation operator"? It's hard for us to conduct these experiments or respond without understanding clearly what this experiment would be. Thank you! | Summary: The paper presents an evolutionary neural architecture search method that utilizes an LLM as the mutator. During the evolutionary search, the LLM is updated according to the evaluation results. Meanwhile, the evaluation results are fed back to the LLM during the mutation process in the form of in-context learning.
Strengths: The work explores a novel direction of using LLMs in NAS. Many efforts have been made in designing this approach to enable the LLM to perform NAS, as it is not specifically trained on this task. The experimental results also show that the naive way of using an LLM is not as effective as the proposed one.
Weaknesses: One missing part of the paper is why we would be interested in using an LLM for NAS. Also, the experiments mainly demonstrate that the proposed method is better than naive prompting methods. How does NAS with an LLM compare to existing NAS methods?
The experiments would be better conducted on large-scale, realistic datasets.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: 1. How does the proposed method perform compared to existing non-LLM methods, given the same number of search trials? And what computational resources were used for the LLM-based and non-LLM search methods?
2. The authors mentioned that the CNN architectures searched might be in the training set of the LLM. Did the authors train the LLM, or is it a publicly accessible model?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 2 fair
Limitations: The authors have not discussed the limitations and broader impact. However, the reviewer has not seen any significant limitations of the method or potential negative broader impact.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the time and effort spent on providing valuable feedback on our work. Below we provide responses to the concerns mentioned.
"One missing part of the paper is why we will be interested in using LLM for NAS"
- Lines 139-150 in our paper are relevant to this question. LLMs have demonstrated exceptional competence at generating code ([Xu et al.](https://arxiv.org/abs/2202.13169)), and since all neural architectures can be encoded via code, LLMs offer a more expressive search tool than the finite hand-designed search spaces that most NAS algorithms entail. The LLM may even generate components that are not typically seen in NAS search spaces. Furthermore, using an LLM introduces other skills that the LLM may have learned from its training, such as the ability to generate high-likelihood outputs, to condition on a target reward, and to transfer learnings from other skills seen in the data. LLMs also introduce less manual engineering/human bias than hand-designing an optimal search space would be. Lastly, LLMs can also be tuned to better adapt to the task, making the LLM an adaptive crossover operator.
"How does the NAS with LLM compare to the existing NAS method?"
- We did attempt a comparison against several common multi-trial and weight sharing NAS methods on the NATS-Bench benchmark. This comparison does handicap EvoPrompting in several key ways (discussed in App. A.9 and referenced in the conclusion), but EvoPrompting still performs comparably to the other NATS-Bench methods, despite not being able to use its full functionality.
- However, multiple past works ([Yu et al.](https://arxiv.org/abs/1902.08142), [Bender et al.](https://arxiv.org/abs/2008.06120), [Li and Talwalker](https://arxiv.org/abs/1902.07638)) have noted the unfairness and confounding nature of NAS comparisons with unequal search spaces. Since EvoPrompting is not designed for use with a finite and discrete search space (i.e. a set of pre-defined building blocks) and instead offers a flexible search method capable of generating more novel and varied architectures via directly generating code, there is no clear and fair way to directly compare EvoPrompting against other NAS methods.
"The experiments are better to be conducted with large scale realistic datasets."
- We were unfortunately compute-constrained – you may notice that we only used a single P100 GPU to train each child model and only prompt-tuned (instead of fine-tuning) the LM. We agree that some interesting results could be obtained from evaluating EvoPrompting on designing larger scale architectures, but given that EvoPrompting was already able to propose multiple novel, non-trivial, and state-of-the-art architectures for a difficult algorithmic reasoning task, we believe this demonstrates the promise and broader applicability of EvoPrompting. Furthermore, our approach often accomplished this via designing architectures of smaller size than the state-of-the-art, suggesting that EvoPrompting can design architectures that scale more efficiently.
"The author mentioned the CNN architecture search might be in the training set of the LLM."
- Sorry for the confusion – we do not mean that the exact architectures generated were in the training set. We only mean that since LLMs are trained on large corpora scraped from the Internet, it has likely seen code for many different convolutional nets before. However, the experiments on MNIST1D are still interesting because it is non-trivial to optimize the best form of CNN needed for a particular task and EvoPrompting was not limited to proposing CNNs. Since EvoPrompting accomplished this task significantly better and more efficiently than the baseline methods, this section shows the promise of EvoPrompting for optimizing neural architectures. But to further address this issue, we also included the CLRS experiments, since this is a newer benchmark and GNNs are studied much less. The novel state-of-the-art GNNs suggested by EvoPrompting contained modifications that none of the authors had seen in prior work, and therefore seemed much less likely to be purely a result of copying the training data.
"Does author train the LLM? Or is it a publicly accessible model?"
- We did not pretrain or finetune the LLM ourselves – we only prompt-tuned it as part of our algorithm (Alg. 1, step 11). It is a publicly available model, but we elided the model name for anonymity reasons. Thanks for pointing out the ambiguity, we will be sure to clarify these details in the next version of the paper.
"How does the proposed method perform comparing to the existing non-LLM method? Given the same number of search? And what’s the computation resource used for LLM search method and nonLLM methods?"
- We presented a comparison against non-LLM NAS methods in App. A.9 on NATS-Bench. However, we noted that these results are difficult to interpret due to the unequal search spaces. When handicapped in this way, EvoPrompting performs comparably to the other NAS methods. (However, we believe that one of EvoPrompting's key strengths is not represented in this comparison – i.e. its ability to generate architecture components that are not represented in traditional search spaces.) We followed the time budgets recommended in [Dong et al.](https://arxiv.org/pdf/2009.00437.pdf) and kept the other hyperparams the same, meaning that EvoPrompting always completed its search with fewer than 1600 samples, whereas most methods involve anywhere from O(10K) ([So et al.](https://arxiv.org/abs/2109.08668)) to O(1T) ([Real et al.](http://proceedings.mlr.press/v119/real20a/real20a.pdf)) samples. However, the NATS-Bench paper did not indicate the no. of samples, FLOPS, or GPU-hours required by each method.
- We used ~130 and ~800 GPU hours to train the child models in the MNIST-1D and CLRS experiments, respectively. (However, this cost would likely be shared by other multi-trial NAS methods.) We used ~24 TPU hours on LM inference and prompt-tuning.
---
Rebuttal Comment 1.1:
Title: Thanks for the response, my score is unchanged.
Comment: Thanks to the authors for the responses. I generally agree with most of the intuition and potential benefits of LLMs that the authors mentioned in the response: a flexible space, knowledge transferred from other training tasks that could lead to good results with high probability, etc. However, I don't think the experimental results solidly support those benefits. In particular, A.9 is emphasized, but its results cannot establish that any of the baselines or EvoPrompting is better than the others.
---
Reply to Comment 1.1.1:
Title: NATS-Bench results
Comment: Thanks for responding to our rebuttal, we appreciate it!
- Re: NATS-Bench results - we noted in both App. A.9 and our previous response that this comparison is unfair and handicaps EvoPrompting in multiple ways, since it was not designed for this kind of use. (We also evaluated our method with far fewer samples, the details of which are mentioned in the previous response.) Furthermore, multiple past works ([Yu et al.](https://arxiv.org/abs/1902.08142), [Bender et al.](https://arxiv.org/abs/2008.06120), [Li and Talwalker](https://arxiv.org/abs/1902.07638)) have also highlighted the confounding nature and unfairness of comparing NAS techniques across unequal search spaces.
- We don't claim that EvoPrompting is strictly better than the other methods evaluated in [Dong et al.](https://arxiv.org/pdf/2009.00437.pdf) (for this particular setting), only that it performs *comparably* in this setting, even despite the handicapping, the smaller amount of trials, and the unequal search spaces. (On every NATS-Bench task considered, EvoPrompting has val/test accuracy that is in the middle of the pack.) EvoPrompting is fundamentally a **different approach designed to support a wider variety of settings** (e.g. settings without hand-designed, finite search spaces) than the methods compared to in the NATS-Bench evaluation.
- EvoPrompting is better suited for settings without a pre-designed search space, as we demonstrate with our CLRS experiments -- our approach is able to propose **novel and non-trivial SoTA architectures** that generalize to other algorithms not seen during the search itself. (Furthermore, none of the ablated methods or baselines we considered accomplished this.) The previous SoTA (Triplet-GMPNN) was designed through careful and thoughtful manual experimentation. It is meaningful that our approach can propose an architecture that significantly outperforms Triplet-GMPNN, even using very few samples and without a hand-designed search space. | Summary: This is an example of LLMs being used in evolutionary algorithms. The authors use an LLM to cross over code snippets that execute to define a graph or neural architecture. It does this by generative means, not syntactic manipulation as is typical in EAs. They use code in the prompt for context, and they improve the crossover by including code snippets that resulted in better results. The prompt also has ranges for the desired size of the model and accuracy that are set relative to a parent model, allowing them to guide an incremental improvement in accuracy while keeping model size in check. They empirically evaluate on MNIST-1D and on CLRS. On MNIST-1D they get variants in terms of accuracy and model size that are comparable to human designs or naive few-shot prompting. On CLRS they get novel architectures that are better than a rather modest benchmark on 21/30 tasks (related to algorithm design). The added bonus is a method that improves prompts through evolutionary search.
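The parent-relative prompt construction described in this summary could be sketched as follows; this is an illustrative reconstruction, not the authors' actual code, and the +1% accuracy and 110%-of-parent size targets are invented heuristics:

```python
def build_prompt(parent_code: str, parent_acc: float, parent_params: int) -> str:
    """Few-shot prompt: parent code annotated with its measured metrics,
    followed by target metrics set relative to the parent, steering the LLM
    toward higher accuracy at a bounded model size."""
    target_acc = min(parent_acc + 0.01, 1.0)   # hypothetical heuristics
    max_params = int(parent_params * 1.10)
    return (
        f"# accuracy: {parent_acc:.3f}, parameters: {parent_params}\n"
        f"{parent_code}\n\n"
        f"# accuracy: {target_acc:.3f}, parameters: {max_params}\n"
    )

prompt = build_prompt("class Net: ...", parent_acc=0.90, parent_params=100_000)
```

The LLM's completion after the final metadata line is then parsed as the candidate child architecture.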
Strengths: EvoPrompting is based on in-context prompting and the variation of the "candidate solution" occurs on a code representation. This changes the search space and makes problem solving more flexible and less reliant on adhoc choices on a different representation level.
In general, the approach is applicable to LM tasks that rely on in-context learning (ICL) or prompt-tuning. These approaches don't need gradients.
The results are promising on the CLRS Algorithmic Reasoning Benchmark but not compared to other NAS techniques. Novel architectures were discovered and they were smaller and had better accuracy than the benchmark.
Weaknesses: The choice of what's in the appendix and in the paper must be hard but I feel at least one code snippet should be shown. The paper could do with one entire example of the prompt and response.
The search space variation in the MNIST-1D benchmark was modest. For the comparisons with CLRS, one architecture served as the benchmark for all programs, while EvoPrompting was judged after being allowed to evolve a solution for each problem independently. The authors mention that one evolved model was evaluated on 3 different problems, but not the others.
There is no comparison of the effort expended to evolve and test vs. NAS by another means.
If the generality of the approach as stated by the authors is to be taken seriously, evidence beyond the narrow scope of code variation and NAS is necessary. There are many many more problems than MNIST-1D NAS or CLRS NAS.
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: How would EvoPrompting do on the problems in the GP symbolic regression benchmarks? How do they differ from CLRS, and why are they or are they not applicable here?
Can you compare to NAS?
How would you select one model from the 21 or 31 options and train it to be tested on all the other problems?
What if you changed the seed architectures? They seem pretty good in both problems.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 2 fair
Limitations: NA
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your thoughtful and thorough feedback - we appreciate the significant time and effort this took. Your questions also gave us very interesting thoughts and directions to think about. We respond to some of the feedback and questions below:
"...not compared to other NAS techniques...Can you compare to NAS?"
- We actually did conduct a limited comparison of EvoPrompting to other common multi-trial and weight sharing NAS methods on the NATS-Bench benchmark. This comparison does handicap EvoPrompting in several key ways (discussed in Appendix A.9 and referenced in the conclusion), but EvoPrompting still performs comparably to the other NATS-Bench methods, despite not being able to use its full functionality.
- However, multiple past works ([Yu et al.](https://arxiv.org/abs/1902.08142), [Bender et al.](https://arxiv.org/abs/2008.06120), [Li and Talwalker](https://arxiv.org/abs/1902.07638)) have noted the unfairness and confounding nature of NAS comparisons with unequal search spaces. Since EvoPrompting is not designed for use with a finite and discrete search space (i.e. a set of pre-defined building blocks) and instead offers a flexible search method capable of generating more novel and varied architectures via directly generating code, there is no clear and fair way to directly compare EvoPrompting against other NAS methods.
"The choice of what's in the appendix and in the paper must be hard but I feel at least one code snippet should be shown. The paper could do with one entire example of the prompt and response."
- This is a good point – having an example would be helpful for the reader to understand both the context of how EvoPrompting works, and emphasize how the LLM is able to condition upon the desired test accuracy and model size. We'll be sure to move an example from the appendix into the paper in the next version.
"For the comparisons with CLRS, one architecture was the compared benchmark for all programs, while EvoPrompting was judged when it was allowed to evolve a solution for each problem independently. Authors mention how one evolved model was evaluated with 3 different problems but no others."
- This is not quite correct – when applying EvoPrompting on the CLRS benchmark, we only applied evolution using validation metrics from a single algorithmic task at a time. However, at the end of EvoPrompting we then trained and evaluated the most fit model separately on all 30 algorithmic tasks in the CLRS benchmark to demonstrate how the suggested design could generalize to other algorithmic tasks that were not seen during EvoPrompting. We applied this process to a total of 3 tasks in the CLRS benchmark due to both computational constraints and there being more headroom in those tasks than others.
Similarly, the previous state-of-the-art architecture that we compared to (Triplet-GMPNN from [Ibarz et al.](https://arxiv.org/pdf/2209.11142.pdf)) was trained and evaluated separately on the 30 tasks. However, the authors hand-designed this algorithm, so it is not clear how many of the tasks were seen and used as validation during the design process.
"The search space variation in the MNIST-1D benchmark was modest."
- The search space itself in the MNIST-1D experiments was expansive – the LLM could generate any architecture (that could be expressed with code of length less than its maximum output length), and indeed many of the candidate designs included modifications beyond variations of CNNs, such as attention modules, recurrent or fully-connected layers, etc. However, the best performing child models were mostly optimized versions of CNNs.
- The CLRS experiments helped address the limitations of the MNIST-1D experiments - the resulting models were far more varied and there was significantly less alignment of the suggested designs with "standard" modules such as attention, recurrence, etc.
"There is no comparison of the effort expanded to evolve and test vs NAS by another means."
- We mentioned in section 4.1 that open-ended multi-trial NAS methods often require on the order of anywhere from O(10K) ([So et al.](https://arxiv.org/abs/2109.08668)) to O(1T) ([Real et al.](http://proceedings.mlr.press/v119/real20a/real20a.pdf)) samples, whereas our experiments only required ~1600.
- In our NATS-Bench comparison (referenced in the conclusion but detailed in Appendix A.9), we used the time budget recommended by [Dong et al.](https://arxiv.org/pdf/2009.00437.pdf) to demonstrate how EvoPrompting still performed comparably to other NAS methods under the same time budgets. However, the NATS-Bench paper does not mention other details about the exact computational resources allocated to each technique, which makes comparison difficult.
- Nevertheless, we're happy to include more details about the exact computational resources we used in our experiments – we currently mention that each child model was trained on a single P100 (not necessarily to convergence). We will insert more detailed estimates of the total amount of compute (~130 GPU hours and ~800 GPU hours for training the MNIST-1D and CLRS child models, respectively, which is a cost that would likely be shared by other multi-trial NAS methods; and ~24 TPU hours for running inference and prompt-tuning on the LLM across both tasks).
"How would evoprompting do on the problems in the GP symbolic regression benchmarks?"
- Given that the symbolic regression benchmarks have a continuous fitness which allows for iteratively improving the proposed answer, we would expect EvoPrompting to work reasonably well.
"What if you changed the seed architectures? They seem pretty good in both problems."
- We used a handful of baseline models (mostly suggested in prior literature) as the seeds for our analyses, in the hopes of finding architectures which would improve over what has already been suggested in past work. While bad starting models would likely damage performance, the process would still improve performance over the seeds.
---
Rebuttal Comment 1.1:
Comment: Thank you for your clarifications; they reinforce my evaluation. The comment about search space variation in the MNIST-1D benchmark being modest was about the range of variation, not the search space. | Summary: This paper introduces a method that uses LLMs as mutation and crossover operators in an evolutionary search process that generates diverse and high-performing neural architectures. They evaluate their method on two datasets: MNIST-1D and CLRS, a benchmark measuring algorithmic reasoning. The EvoPrompting process generated SOTA models on CLRS and performed well on MNIST-1D. Importantly, it generated nontrivial architectures.
Strengths: Originality:
The concept of using a sequence model to generate neural network architectures in NAS is not new (Neural Architecture Search with Reinforcement Learning by Zoph), but was previously very constrained because the parameters the sequence model output were simple: things like width and kernel size for a CNN. This model is much more expressive and can describe a much broader set of architectures because it generates code. Combining LLMs and evolution is also not strictly novel (see Language Model Crossover: Variation through Few-Shot Prompting), but I believe this is quite different.
Quality:
I think the quality of the writing and the research is high. They show state-of-the-art results on an admittedly new benchmark. Nobody has done this before, and they show that the sample complexity of the search is good.
Significance:
It demonstrates a practical way to leverage the capabilities of language models for complex tasks which has wide ranging implications. I could imagine this being extended to program synthesis for example.
I like that the authors included some sample architectures in the supplemental materials. It looks like the search discovered nontrivial architectures.
Weaknesses: The paper could be improved by:
- Comparing to other NAS approaches
- Evaluating on larger tasks. I don't fault the authors at all for this, NAS is expensive and I like that they used MNIST-1D, but more domains would strengthen the claims.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: How do you think this would fare with larger models?
Are there any specific domains where you think this approach would struggle?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 3 good
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your detailed and insightful comments – we particularly appreciated your references to other related work and how those relate or are different from our work. We detail below our responses to some of the weaknesses and questions.
"Weaknesses: Comparing to other NAS approaches"
- We presented a limited comparison of EvoPrompting to other common multi-trial and weight sharing NAS methods on the NATS-Bench benchmark, which is common and standard within the NAS community. This comparison does handicap EvoPrompting in several key ways (discussed in Appendix A.9 and referenced in the conclusion), but EvoPrompting still performs comparably to the other NATS-Bench methods, despite not being able to use its full functionality.
- However, multiple past works ([Yu et al.](https://arxiv.org/abs/1902.08142), [Bender et al.](https://arxiv.org/abs/2008.06120), [Li and Talwalkar](https://arxiv.org/abs/1902.07638)) have noted the unfairness and confounding nature of NAS comparisons with unequal search spaces. Since EvoPrompting is not designed for use with a finite and discrete search space (i.e. a set of pre-defined building blocks) and instead offers a flexible search method capable of generating more novel and varied architectures via directly generating code, there is no clear and fair way to directly compare EvoPrompting against other NAS methods.
"How do you think this would fare with larger models?"
- Although it's hard to know without further extensive experiments that we cannot conduct with our current computational constraints, we suspect that using EvoPrompting to improve larger architectures (e.g. Transformers) would require a more modular approach, similar to our CLRS experiments. That is, EvoPrompting could be applied to a single module at once (e.g. the attention mechanism) or applied at a higher level (i.e. being prompted to design the optimal combination of modules, given a list of the already-designed modules). Due to the inherent flexibility of LLMs, we anticipate that there are a wide variety of ways that EvoPrompting could be used to tackle this problem. The use of a longer LLM context would also be important, but the current research on how best to adapt LLMs to effectively use a longer context is still burgeoning. We look forward to exploring this in follow-up work.
"Combining LLMs and evolution is also not strictly novel, see Language Model Crossover: Variation through Few-Shot Prompting, but I believe this is quite different."
- This is true, and a great point – we compare and contrast our work with this work in the Related Works section. We also note that this is concurrent work that was released at a very similar time as our paper.
"Are there any specific domains where you think this approach would struggle?"
- Our approach currently relies on the existence of a fitness gradient – if the rewards are more sparse, we would expect EvoPrompting to struggle to find better solutions. This is a general weakness of many evolutionary or RL-based approaches. However, if iterative improvement can consistently improve reward, then we expect EvoPrompting to be able to find high fitness solutions. | null | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: The paper explores prompt tuning for neural architecture search. While language models can produce code snippets with prompting, it is often quite difficult for language models to succeed at this task. The authors propose EvoPrompting, a prompt tuning method that produces neural architectures. The authors experiment with MNIST-1D where they find that EvoPrompting produces better performing neural networks with fewer parameters than the baselines. They then test their method on producing graph neural networks for algorithmic reasoning and find that they achieve the state-of-the-art on 21 out of 30 tasks.
Strengths: This is a high quality work with strong results on the CLRS algorithmic benchmark. The authors first carefully validate their method on a simple task MNIST-1D and find that their method produces smaller but effective neural networks compared to existing baselines. The results on the CLRS benchmark are quite impressive, sometimes improving higher than 20% (QuickSort) over the baseline while still having the same number of parameters.
Weaknesses: Analysis of the language model: the paper uses a large language model as a blackbox but fails to provide analysis about the language model itself. The paper trains a new 62B parameter decoder-only language model trained on conversational, web, and code documents. It would be useful to know if there are equivalent open-source models such as StarCoder that can be used to replicate the results? Furthermore, it is unclear how important the model size will affect the capabilities. It would be interesting to the readers to know the performance on CLRS across model sizes if possible.
Unclear use of “soft prompt-tuning”: The work uses soft prompting in each round to find better in-context learning prompts. However, it is not clear if soft prompting is the best method to find the relevant prompts. It would be helpful to compare soft prompting with related techniques such as adapters [a], LoRA [b], and T-Few [c].
Nit: the work could provide more background regarding genetic programming similar to [d].
[a] Parameter-Efficient Transfer Learning for NLP. ICML 2020.
[b] LoRA: Low-Rank Adaptation of Large Language Models. ICLR 2022.
[c] Improving In-Context Few-Shot Learning via Self-Supervised Training. NeurIPS 2022.
[d] Evolution through Large Models. 2022.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: See the questions for weaknesses.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 4 excellent
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: The authors thank the reviewer for taking the time to carefully note both the strengths and weaknesses of our approach – we particularly appreciated the detailed notes about the particulars of applying LLMs and the associated implications for the performance of EvoPrompting.
"the paper uses a large language model as a blackbox but fails to provide analysis about the language model itself. The paper trains a new 62B parameter decoder-only language model trained on conversational, web, and code documents."
- Sorry for the confusion, we did not pre-train or fine-tune the LLM ourselves – it is an off-the-shelf LLM that is currently publicly available. We elided the name for anonymity purposes but will re-insert the details if the paper is accepted. We only prompt-tuned the model as part of the EvoPrompting algorithm.
- Since this is an off-the-shelf LLM that already has a paper published about its general performance across a variety of both language and code generation tasks, we did not feel the need to additionally analyze this model. We will include a citation to this LLM paper in the next version of the paper.
"Furthermore, it is unclear how important the model size will affect the capabilities. It would be interesting to the readers to know the performance on CLRS across model sizes if possible."
- We agree that it would be interesting to see how EvoPrompting performance scales as a function of LLM size, but unfortunately did not have the computational resources to conduct extensive experiments across many LLM sizes.
- We did try preliminary experiments with a smaller LLM (~8B parameters) and found that EvoPrompting took a longer time and more samples in order to reach similar accuracies as the 64B model. The primary issue was that the smaller LLM was more likely to generate incorrect syntax or repetitive programs, thereby requiring sampling more programs than the 64B model required. This is a common issue for LLMs with less capacity, but it is possible that pre-training on more data could help resolve this. We did not have enough resources to pre-train an LLM ourselves, but hope to explore this with other LLMs in follow-up work.
"Unclear use of “soft prompt-tuning”: The work uses soft prompting in each round to find better in-context learning prompts. However, it is not clear if soft prompting is the best method to find the relevant prompts"
- Our choice of soft prompt-tuning (instead of prefix-tuning, adapter-based methods, or LoRA) was based both on ease of implementation and parameter efficiency. Our soft prompts had dimension 16. However, prefix-tuning ([Li and Liang](https://arxiv.org/abs/2101.00190)) requires adding a prefix to every hidden layer of the model, usually resulting in the tuning and storage of 0.1% of the number of model parameters, which would be 64M parameters in our case. Similarly, adapters ([Houlsby et al.](https://arxiv.org/abs/1902.00751)) usually require tuning and storage of 2-4% of the number of model parameters (~1.28B-2.56B parameters in our case). The dimensionality of LoRA updates varies depending on the desired rank, but the smallest number of trainable parameters explored in [Hu et al.](https://arxiv.org/abs/2106.09685) is 0.77M parameters.
- Given sufficient computational resources, other parameter-efficient methods like LoRA, prefix-tuning, or adapters could result in even better performance, but EvoPrompting can use any model tuning method that involves likelihood maximization.
- We sadly did not have enough compute to run a thorough comparison of all these techniques, but we think it is still promising that our technique works with only a soft prompt of dimension 16 – we cannot claim that soft prompt-tuning is strictly the most optimal way to adapt the LLM during each round of EvoPrompting, but it requires tuning the fewest parameters. Given how expensive NAS methods already are, we think this is an important point.
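To make the parameter-count comparison above concrete, here is a back-of-envelope sketch in Python. It is a hypothetical illustration only: the hidden size `D_MODEL` is an assumed value, and the percentages are the rough figures cited in this response (0.1% for prefix-tuning, 2-4% for adapters), not measurements.

```python
# Rough trainable-parameter counts for the tuning methods discussed above.
# MODEL_PARAMS follows the ~64B figure used in the rebuttal's arithmetic;
# D_MODEL is a hypothetical hidden size chosen only for illustration.
MODEL_PARAMS = 64_000_000_000
D_MODEL = 8192  # assumed hidden size

def soft_prompt_params(prompt_len: int = 16, d_model: int = D_MODEL) -> int:
    """Soft prompt-tuning trains only a small matrix of virtual-token embeddings."""
    return prompt_len * d_model

def prefix_tuning_params(model_params: int = MODEL_PARAMS) -> int:
    """Prefix-tuning (Li & Liang) typically trains ~0.1% of model parameters."""
    return int(0.001 * model_params)

def adapter_params(model_params: int = MODEL_PARAMS, fraction: float = 0.02) -> int:
    """Adapters (Houlsby et al.) typically train ~2-4% of model parameters (low end here)."""
    return int(fraction * model_params)

print(soft_prompt_params())    # 131072  (16 x 8192)
print(prefix_tuning_params())  # 64000000  (the "64M" above)
print(adapter_params())        # 1280000000  (the "1.28B" above)
```

The three-orders-of-magnitude gap between the soft prompt and the next-cheapest alternative is the point being made: for an already-expensive NAS loop, the adaptation step should be as light as possible.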
---
Rebuttal Comment 1.1:
Title: Acknowledgement
Comment: Thank you for your response. The authors have clarified my questions. I will keep my score at 7. | null | null | null | null | null | null |
Interpretable Prototype-based Graph Information Bottleneck | Accept (poster) | Summary: The authors propose a new GNN explanation framework, which combines prototype learning and the information bottleneck. They define the Prototype-based Graph Information Bottleneck framework in detail, and experiments have demonstrated the effectiveness of their proposed framework.
Strengths: 1.The authors introduce the Graph Information Bottleneck based on prototype learning, which makes a significant improvement over the baseline methods.
2.The experiments are comprehensive and there are many additional experiments in the appendix.
3.The paper is well written.
Weaknesses: 1.I argue that the novelty of the paper is limited by the fact that both the IB approach and the prototype approach have been well discussed before, and the authors seem to have simply combined the two methods together.
2.The number of prototypes per class is an important hyperparameter and the author should add a hyperparameter analysis of it.
3.I am not quite sure about the differences between PGIB and ProtGNN. If I use the GIB strategy in ProtGNN, what effect will it have?
4.In this paper, authors use GIN as a backbone. However, PGIB as a model-agnostic approach should be validated on more GNN structures, e.g. GCN, GraphSage.
Others:
1.Line 143 "PGIB s a novel " ->"PGIB is a novel "
2.Line 275 "This is because When" -> "This is because when"
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1.I wonder if the authors' proposed approach can alleviate the OOD problem which is widely discussed in the explanation GNNs[1,2].
2.I notice that the authors obtain two types of subgraphs, one in the Subgraph Extraction Layer and one in the Prototype Projection. In practice, which type of subgraph do we use as the final interpretation? What if the two types of subgraphs conflict?
3.If the interpretability of the prototypes is not considered, can the Prototype Projection method be removed?
References:
[1] Debiasing graph neural networks via learning disentangled causal substructure.
[2] Causal attention for interpretable and generalizable graph classification.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: See Weaknesses for details.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **W1.**
**# Method Selection Inspired by Motivation and Purpose**
As highlighted in our introduction, ProtGNN utilizes prototypes to explain the model's training process (i.e., reasoning process (RP)). However, we found that ProtGNN overlooks the key substructures in input graphs, leading to less informative RP and decreased performance in downstream tasks (Fig. 1(a)). Drawing from this observation, we recognize the importance of including informative information from the input graphs, while excluding uninformative information in the RP.
The key to addressing these challenges is through identifying the key substructure of an input graph. Among the methods available for extracting key substructures (e.g., GNNexplainer, PGexplainer, subgraphX, IB), we chose the IB-based approach because it is currently the most effective approach for identifying important substructures during the learning process. Unlike post-hoc methods, IB allows us to extract key substructures during the training process, aligning with our objectives. We would be delighted if the novelty of our work is recognized in the motivation behind our choices, rather than merely in the method itself.
**# Theoretical Analysis for a Well-Designed Loss Function**
Additionally, we want to clarify that our approach goes beyond a simple combination of existing methods. We designed and implemented our loss function based on a thorough theoretical analysis. Our work is the first paper to provide a theoretical framework for approaching IB from the perspective of prototypes. Specifically, we introduced a $L_{MI}^2$ loss, which not only improves the interpretability of the RP (See Fig 1(b)), but also the model performance on downstream tasks (See Fig 6(b)). The newly introduced loss, $L_{MI}^2$, plays a key role in enabling the mutual interaction between $G_{sub}$ obtained from the IB approach, and the prototype $G_p$.
**# Interpretability Stabilization via Prototypes Merging**
Lastly, it is important to note that, inspired by ProtoPShare, our work is the first work that introduces an effective method for merging prototypes in the graph domain, aimed at enhancing not only both the explanation of the reasoning process, but also the overall performance on downstream tasks.
We hope that this clarification highlights the contributions of our research and provides a better understanding of our work's motivations and goals.
**W2.**
We have conducted additional experiments where we varied the number of prototypes per class. Table PDF-5 shows the results of the graph classification according to the number of prototypes.
Additionally, we conducted interpretation visualizations of $G_p$ based on the number of prototypes in Figure PDF-4. When the number of prototypes is small, the prototypes do not contain diverse substructures. This limitation arises due to the necessity of making predictions using a restricted number of prototypes. On the other hand, if the number of prototypes is large, a greater diversity of prototypes can be achieved because various and complex information can be obtained from $G_{sub}$.
**W3.**
As mentioned in our response to W1, a naive combination of ProtGNN and GIB allows each to operate independently. As such, this simple combination does not consider the loss functions like $L_{MI}^2$ in PGIB, and thus, it leads to an uninformative reasoning process. This is because the prototypes may not adequately capture the key substructures that significantly influence the downstream label prediction. We want to emphasize that a naive combination of the prototype approach and the GIB approach is insufficient to enhance the interpretability of the reasoning process and the performance on downstream tasks; careful design of the losses is crucial.
**W4.**
We appreciate your valuable suggestions. In response, we conducted additional experiments using various GNN structures. The results are shown in Table-P3 and demonstrate the strong performance of our model on different backbones.
**Others**
We appreciate your attention to the overlooked typos. We will make sure to fix them in the revised version.
**Q1.**
We sincerely appreciate your feedback, and we fully agree with your suggestion to include an OOD task. For these experiments, we utilized real-world molecule graphs, and followed existing studies [3,4] for OOD in which the data is split based on scaffold (i.e., scaffold split).
[3] Li, Haoyang, et al. "Ood-gnn: Out-of-distribution generalized graph neural network." TKDE (2022).
[4] Gui, Shurui, et al. "Good: A graph out-of-distribution benchmark." NIPS (2022).
We conducted scaffold-based OOD experiments to evaluate the generalization performance of our model. We split our dataset based on the scaffold in a ratio of 8:1:1, and obtained train and test datasets with totally different distributions by including the scaffold containing the most data in the training set. Table-P4 shows the result of the experiment, and it demonstrates the generalization performance of our model in an OOD setting.
**Q2.**
We would like to emphasize that explainability can be viewed from two perspectives: 1) interpretability of the model's predictions, and 2) interpretability of the model's reasoning process. As you mentioned, we obtain two types of subgraphs: a) from the Subgraph Extraction Layer, and b) from the Prototype Projection.
- 'a)' provide interpretation for the model's predictions, giving insights into which subgraph is considered important when the model makes a final prediction.
- 'b)' provide interpretation for the reasoning process. In other words, each class can be represented by the prototypes, and we can interpret how the model represents each class during the training process.
We understand the reviewer’s confusion, and we will make sure to clearly explain them in the paper.
**Q3.**
Yes. If there is no need to interpret the reasoning process, it can be removed.
---
Rebuttal Comment 1.1:
Comment: Thanks to the authors for the response. Although most of my questions have been addressed, I still have several questions. Specifically,
1.In Table PDF-3, authors compare PGIB and PGIB_cont with different network structures. However, I don't see the results for other baselines such as GSAT with GAT. I am not sure if PGIB still maintains better performance with other network structures such as GAT.
2.I understand the authors' response to Q2. Can the author provide more specific examples to illustrate this process in detail? Authors can assume that PGIB has been applied in practice, and what kind of explanation will PGIB provide to enhance humans' trust in PGIB?
---
Reply to Comment 1.1.1:
Title: Response by authors
Comment: We appreciate your prompt response and valuable questions on our paper.
1.
We have conducted additional experiments on other baselines with various backbones. The following table shows the classification performance results.
| | | | GCN | | | | | GIN | | | | | GAT | | |
|:--------:|:----------:|:----------:|:-----------:|:----------:|:--------------:|:----------:|:----------:|:----------:|:----------:|:--------------:|:----------:|:----------:|:--------------:|:----------:|:--------------:|
| | ProtGNN | GSAT | GIB | VGIB | PGIB | ProtGNN | GSAT | GIB | VGIB | PGIB | ProtGNN | GSAT | GIB | VGIB | PGIB |
| MUTAG | 85.00±4.47 | 79.00±9.17 | 75.50±10.36 | 70.50±7.89 | **86.00±3.74** | 80.50±9.07 | 80.00±8.94 | 79.00±6.24 | 81.00±6.63 | **82.50±3.54** | 86.00±3.74 | 80.00±7.75 | 81.50±9.23 | 67.00±8.12 | **86.50±3.91** |
| PROTEINS | 71.96±1.34 | 70.45±3.10 | 70.80±6.93 | 75.54±5.04 | **77.86±1.48** | 73.83±4.22 | 69.64±4.71 | 75.25±5.92 | 73.66±3.32 | **77.50±2.42** | 72.14±1.54 | 68.21±3.37 | 75.45±3.42 | 70.09±6.47 | **78.21±1.04** |
| DD | 65.29±2.38 | 71.34±3.95 | 72.35±7.15 | 65.46±6.27 | **72.69±1.14** | 69.15±4.33 | 71.93±2.74 | 72.61±8.26 | 68.32±6.20 | **73.70±2.14** | 65.21±1.60 | 71.76±3.28 | **73.53±7.49** | 66.89±3.13 | 71.34±1.95 |
The reported results suggest that our method outperforms most other baselines in various backbones.
---
2.
We will provide an example using a classification task on molecular graphs.
Subgraph Extraction Layer presents to the user the important components (i.e., atoms) that had a significant impact on predicting the target molecule into the corresponding class. In other words, this explanation includes the subgraph structure of the atoms within the target molecule that play a significant role in the prediction.
Prototype Projection provides the process through which the model predicts the target molecule into the corresponding class. The prototype shows the learned knowledge of the model (such as molecular structures learned during training time) that was utilized for predicting the target molecule. Specifically, since each prototype is projected onto the nearest training graph, we can identify the training graph that had the most influence on predicting the target molecule through the prototypes. For example, if the model's predicted probability for a specific molecule belonging to class c is high, it demonstrates to the user that the model considered a specific subgraph within a particular training molecule to be essential for class c, and that the prediction was made based on the presence of this subgraph within the target molecule. The purpose of such explanations is to present the model's reasoning process for molecular graphs in a way that is understandable to humans.
As a result, PGIB aims to provide both interpretability from the perspective of input molecular graphs and the model's reasoning process, which can play a crucial role in enhancing the user's understanding and confidence in the model.
We hope this explanation adequately addresses your question.
We appreciate your quick response once again. | Summary: Interpretable graph learning can promote the use of graph-based scientific applications by providing model explanations. This paper focuses on extracting key subgraphs and employing a case-based reasoning process (also known as prototype learning) for model prediction. To make the extracted substructures more informative, the information bottleneck theory is incorporated into the subgraph extraction. Several strategies, such as prototype merging and connectivity constraints, are used to improve the model's interpretability and accuracy. Finally, this paper provides both qualitative and quantitative analyses of the model's interpretability and performance.
Strengths: 1. The idea of using prototype learning to explain key substructures of the input graph is intuitive and reasonable.
2. The extension of information bottleneck theory to prototype learning is inspiring and has practical utility in a wide range of application scenarios. In particular, it can help compress excessive information from the input graph and provide more fine-grained explanations, including key subgraphs and representative examples.
3. The paper is well-organized and easy to follow.
Weaknesses: 1. The related works are not properly cited. For example, the prototype merging method is quite similar to the one used in ProtoPShare [1], yet the author simply claims that "We propose a method to effectively merge the prototypes" (Line 224), without mentioning any existing works. This can confuse readers about the technical contribution.
[1] Rymarczyk, Dawid, et al. "Protopshare: Prototypical parts sharing for similarity discovery in interpretable image classification." Proceedings of the 27th ACM SIGKDD Conference on Knowledge Discovery & Data Mining. 2021.
2. The paper's experiments are not sound enough. Firstly, the paper should include at least one black-box GNN model for performance prediction. Secondly, when evaluating the fidelity scores, ProtGNN should not be excluded (Section 4.3). Third, the paper is missing ablation studies for some proposed components, such as Connectivity loss.
3. The design of dropping the first term in equation 9 (in Section 3.3.2) is not well-motivated. The goal is to minimize the whole $I(Y;\mathcal{G}_p|\mathcal{G}_{sub})$ instead of the single term $I(\mathcal{G}_p ; \mathcal{G}_{sub})$. The paper should provide essential theoretical analysis to better illustrate the effectiveness and rationality of this optimization goal.
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: 1. What is the physical meaning of $p_i$ in Eq. 5? It seems to represent the extent of interference on the node representation. It is unclear how this practice can be considered as selecting a subgraph, making the physical meaning of connectivity loss also unclear.
2. Why adding interference to the nodes according to the probability $p_i$ yields the representation of the subgraph? Is the subgraph representation obtained in this way equal to $f_g(G_{sub})$?
3. In Eq. 8, how is the first inequality derived?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 3 good
Contribution: 3 good
Limitations: The authors have discussed the limitations and societal impacts.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **W1** Thanks for raising this issue. In fact, we are aware of ProtoPShare, and thus we cited it in Sec 3.5 Line 223 as reference “[15]”. While ProtoPShare is the first work that applies prototype merging in image classification, our work is the first to demonstrate the effectiveness of prototype merging on graph-structured data. We fully agree that the sentence in Line 224, “We propose ...”, may have confused the reviewer, and we will make sure to revise it along with an explicit citation: “Inspired by ProtoPShare [15], we propose a prototype merging technique for graph-structured data.”
**W2**
-**Black-Box GNN**
Find results attached in pdf (Tab P1). We observe our model outperforms black-box GNNs.
-**Fidelity Scores of ProtGNN**
We would like to clarify that measuring the fidelity score of ProtGNN is not possible. We explain reasons:
- First, recall that our model provides interpretability in two aspects: 1) model’s prediction based on the substructure of the input graph (i.e., $G_{sub}$) (Sec 4.3), and 2) model’s reasoning process based on the visualization of learned prototypes (i.e., $G_p$) (Apx 4.4).
- Moreover, fidelity score quantifies the extent to which explanations accurately capture the important components that contribute to the model prediction (Line 307), meaning that fidelity score is measured based on $G_{sub}$ and not $G_p$.
- However, ProtGNN is a method for explaining the model’s reasoning process based on $G_p$, and it does not extract the key subgraph $G_{sub}$. Hence, its fidelity score cannot be measured.
We may still consider using $G_p$ to compute the fidelity score. However, as $G_p$ is obtained by projecting prototypes onto the training graphs, the $G_p$'s produced by different methods would be projected onto different training graphs, which means that different graphs would be compared for the same target. This variability makes it challenging to fairly compare fidelity scores across different methods. Therefore, we only compare fidelity scores with methods that provide interpretability for $G_{sub}$ (GIB, VGIB, and PGIB).
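For readers unfamiliar with the metric, fidelity is commonly computed as the drop in the model's score for the true class after the explanation subgraph is removed from the input. The function below is a generic sketch of that common definition, not the paper's exact implementation:

```python
def fidelity_plus(orig_scores, masked_scores):
    """Mean drop in the true-class score when the explanation subgraph is
    removed from each input graph (higher = more faithful explanation).

    orig_scores[i]   : model score for graph i's true class on the full graph
    masked_scores[i] : the same score after removing the explanation subgraph
    """
    assert len(orig_scores) == len(masked_scores)
    return sum(o - m for o, m in zip(orig_scores, masked_scores)) / len(orig_scores)

# Toy example: removing the explanation hurts predictions a lot -> high fidelity.
print(fidelity_plus([0.9, 0.8, 0.95], [0.3, 0.4, 0.25]))  # ~0.5667
```

This is exactly why the metric requires an extracted $G_{sub}$ to mask out: without an input-side explanation there is nothing to remove from the graph.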
-**Ablation**
Experiment on connectivity loss is shown in A.4.1. To provide further clarity in our analysis of PGIB, we conducted an ablation study including the connectivity loss at a more fine-grained level in Table R1, R2, and Table PDF-2. Additionally, interpretation visualization of $G_{sub}$ varying with Connectivity Loss is presented in Figure PDF-3.
**W3**
Note that the two terms in Eq 9 originate from $-I(Y;G_{sub})$, which is the 1st term in Eq 3. That is, minimizing $-I(Y;G_{sub})$ aims to increase the information regarding $Y$ in $G_{sub}$, and thus the two terms in Eq 9 both aim to increase the information regarding $Y$ in $G_{sub}$. Given this fact, it is intuitive that minimizing the 1st term in Eq 9, i.e., $I(G_p;Y,G_{sub})$, aims to reduce the information of $Y$ in $G_p$, which eventually increases the information regarding $Y$ in $G_{sub}$. In other words, minimizing $I(G_p;Y,G_{sub})$ entails removing the information about $Y$ from $G_p$, because it solely aims to maximize $I(Y;G_{sub})$. However, our goal is to establish an interaction between $G_p$ and $G_{sub}$ to enhance both the performance and the interpretability of the reasoning process. For this reason, we excluded the 1st term.
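As a side note for readers tracing this argument, the chain rule of mutual information (a general identity, not reproduced from the paper) makes the trade-off explicit:

```latex
% Chain rule of mutual information (general identity):
I(G_p;\, Y, G_{sub}) \;=\; I(G_p;\, G_{sub}) \;+\; I(G_p;\, Y \mid G_{sub})
```

Minimizing the left-hand side would therefore also suppress $I(G_p;G_{sub})$, which conflicts with the stated goal of mutual interaction between $G_p$ and $G_{sub}$.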
**Q1**
**-Eq.5**
Eq. 5 controls the degree of interference in the node representation. $p_i$ is learned from the node representation $h_i$ through an MLP, and it attenuates the information of $G$ by injecting noise into the node representation. $\epsilon$ is the noise sampled from a parametric noise distribution. We assign each node a probability of being replaced by noise $\epsilon$, and $p_i$ controls the information transmitted from $h_i$ and $\epsilon$ to $z_i$; e.g., when $p_i = 1$, all information from $h_i$ is transmitted to $z_i$. Conversely, when $p_i = 0$, $z_i$ contains only noise $\epsilon$ and does not include any information from $h_i$. In short, we expect unimportant nodes to be replaced by noise $\epsilon$, while the remaining nodes constitute an important subgraph.
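A minimal sketch of the per-node noise injection described above, in plain Python. The mixing form $z_i = p_i h_i + (1 - p_i)\epsilon$ is our reading of the description; the exact noise distribution and its parameters in Eq. 5 are assumptions here:

```python
import random

def inject_noise(h, p, mu=0.0, sigma=1.0, rng=None):
    """Per-node information control: z_i = p_i * h_i + (1 - p_i) * eps.

    h : list of node-feature vectors (lists of floats)
    p : per-node keep probabilities in [0, 1]; p_i = 1 keeps h_i intact,
        p_i = 0 replaces the node entirely with noise eps ~ N(mu, sigma^2)
    """
    rng = rng or random.Random(0)  # seeded for reproducibility of the sketch
    z = []
    for h_i, p_i in zip(h, p):
        eps = [rng.gauss(mu, sigma) for _ in h_i]
        z.append([p_i * x + (1.0 - p_i) * e for x, e in zip(h_i, eps)])
    return z

# Nodes with p_i = 1 pass through unchanged; nodes with p_i = 0 become pure noise.
z = inject_noise([[1.0, 2.0], [3.0, 4.0]], [1.0, 0.0])
```

In the actual model the surviving (high-$p_i$) nodes are what constitute $G_{sub}$, and the pooled representation over them is what is compared against the prototypes.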
**-Connectivity Loss**
It aims to minimize the number of isolated nodes in $G_{sub}$, and promote connected subgraphs in order to enhance interpretability of $G_{sub}$. Specifically, $S_g$[j, 0] and $S_g$[j, 1] denote the probability of node $v_j$ ∈ $V_g$ belonging to $G_{sub}$ and $\bar{G}_{sub}$, respectively.
$a_{11}=\sum_{i,j}A_{ij}p(V_i\in G_{sub}|V_i)p(V_j \in G_{sub}|V_j),$
$a_{12}=\sum_{i,j}A_{ij}p(V_i\in G_{sub}|V_i)p(V_j \in \bar{G}_{sub}|V_j).$
For example, we use $a_{11}$ and $a_{12}$ to denote the element (1,1) and the element (1,2) of $S^TAS$. Minimizing $L_{con}$ causes $\frac{a_{11}}{a_{11}+a_{12}}$ to approach 1, indicating that if $V_i$ is in $G_{sub}$, its neighbors also have a high probability to be in $G_{sub}$. On the other hand, minimizing $L_{con}$ causes $\frac{a_{12}}{a_{11}+a_{12}}$ to approach 0, indicating that when $V_i$ is in $G_{sub}$, its neighbors have a low probability to be in $\bar{G}_{sub}$.
In other words, $L_{con}$ aims to minimize the number of isolated nodes in a subgraph $G_{sub}$ by adjusting the node selection probabilities based on the connectivity information of $G$, which leads to stable connectivity.
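A minimal NumPy sketch of such a connectivity regularizer (an illustration under assumptions: a soft assignment matrix $S$ whose first column holds the subgraph probabilities, and a row-normalized $S^TAS$ pushed towards the identity; the paper's exact loss may differ in normalization details):

```python
import numpy as np

def connectivity_loss(A, p_sub):
    """Penalize scattered node selections: neighbors of subgraph nodes
    should also be likely to belong to the subgraph."""
    S = np.stack([p_sub, 1.0 - p_sub], axis=1)      # (n, 2) assignment matrix
    M = S.T @ A @ S                                 # 2x2; M[0,0]=a11, M[0,1]=a12
    M_norm = M / (M.sum(axis=1, keepdims=True) + 1e-8)
    return np.linalg.norm(M_norm - np.eye(2), ord="fro")
```

On a path graph, selecting a connected run of nodes yields a lower loss than selecting every other node, matching the intended behavior.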
**Q2**
Refer to our answer to Q1 for why adding interference yields the representation of the subgraph. Moreover, since we adopt the max pooling for $f_g$, the subgraph representation obtained in this way is equal to $f_g(G_{sub})$.
**Q3**
By definition, we can derive:
$I(Y;G_{sub},G_p) = E_{Y,G_{sub},G_p}[\log p(Y|G_{sub},G_p) - \log p(Y)]$,
$I(Y;\gamma(G_{sub},G_p)) = E_{Y,G_{sub},G_p}[\log p(Y|\gamma(G_{sub},G_p)) - \log p(Y)]$.
Additionally, it is known that the following holds, since applying the mapping $\gamma$ can only lose mutual information [1]:
$I(Y;G_{sub},G_p) \geq I(Y;\gamma(G_{sub},G_p))$
Therefore, we can obtain the following inequality:
$E_{Y,G_{sub},G_p}[\log p(Y|G_{sub},G_p)] - E_Y[\log p(Y)] \geq E_{Y,G_{sub},G_p}[\log p(Y|\gamma(G_{sub},G_p))] - E_Y[\log p(Y)]$
[1] Learning invariant graph representations for out-of-distribution generalization. NeurIPS 2022.
---
Rebuttal Comment 1.1:
Comment: Thank you for your detailed response. As my major concern is about missing experiments and the authors have added them, I'm happy to raise the score. | Summary: The paper introduces a new design of interpretable graph neural networks (GNNs) that combines ideas from information bottleneck approaches and prototype-based approaches for by-design interpretation. The system, titled PGIB, extracts a subgraph from a given input graph and compares the embedding of the extracted subgraph in latent space to a set of learnt prototype subgraphs for final prediction. The authors evaluate their method on multiple datasets and baselines for classification performance and fidelity and demonstrate their efficacy for both.
Strengths: 1. The presented idea is novel. I have not seen any previous approach combining information bottleneck principles with prototype based classification.
2. The choice of baselines for comparison and overall experimental setup is solid. The main results in the paper are also convincing and positively reflect for the method.
3. The paper is generally well written with clear motivations.
Weaknesses: 1. The paper while reading gives sense of having too many moving parts.
2. Hyperparameter selection is understudied and feels arbitrary. The ablation studies should be strengthened.
3. Interpretation analysis -- Various choices in the system (loss weights or otherwise) are not well studied from the lens of interpretability. There are comments made around $\alpha_1, \alpha_2$ in Sec 4.4 but are not very well supported, with sparing analysis in appendix. At the very least for multiple choices you should include qualitative results specially for the ones for which you have some insights.
Overall it's a decent paper but various choices the authors make need to be better supported experimentally.
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: 1. **Hyperparameter selection:** -- I am a little puzzled by the hyperparameter choices selection. For eg. is the performance drop in Fig. 6(a) "drastic". It is just 2-3% drop in mean performance throughout from $\alpha_1=0$ to $1$. Am I understanding the graph incorrectly? The behaviour you highlight in A.4.3 is expected but I am rather thinking how is the performance still so high if the subgraphs are too compressed for $\alpha_1=1$? Similarly $\alpha_2$ choice also seems arbitrary based on the Fig. 6(b) with even less performance variation.
2. **Ablation** -- The number of prototypes and parameters for merge operation. How were they determined? How do they affect the interpretability and performance?
3. Typos in line 126 ("is maximizes"), line 312 ("Fidlity"), Fig 4 ('sparcity')
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 3 good
Contribution: 3 good
Limitations: Authors discuss these in supplementary. I would prefer to have the limitations in the main paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ---
**Weaknesses**
**W1.** Too many moving parts.
**A1.** Thank you for your valuable feedback. We will make adjustments to the placement of the figures and tables to ensure they are easy to follow in our paper.
---
**W2.** Hyperparameter selection / Ablation studies.
**A2.** We sincerely appreciate your feedback. To provide further clarity in our analysis of PGIB, we conducted an ablation study at a more fine-grained level than what was initially presented in the paper. We will include these additional results and analysis to provide a better understanding of our model. Here are the results and the corresponding analysis:
**Table R1.**
| **$\alpha_1$** | **0** | **0.0001** | **0.001** | **0.01** | **0.1** | **1** |
|:--------------:|:----------:|:----------:|:----------:|:----------:|:-----------:|:----------:|
| PROTEINS | 76.8±3.16 | **77.50±2.42** | 75.72±2.32 | 75.54±3.04 | 73.75±5.37 | 73.48±4.25 |
| NCI1 | 75.21±1.41 | **78.25±2.13** | 73.75±3.04 | 74.09±3.15 | 72.55±2.55 | 69.09±2.18 |
**Table R2.**
| $\alpha_2$ | 0 | 0.0001 | 0.001 | 0.01 | 0.1 | 1 |
|----------|------------|----------------|------------|------------------|------------|------------|
| PROTEINS | 75.98±2.84 | 76.43±2.89 | 76.25±2.23 | 76.16±4.47 | **77.50±2.42** | 75.18±2.10 |
| NCI1 | 74.79±1.59 | 76.16±2.92 | 75.84±2.91 | **78.25 ± 2.13** | 76.71±2.80 | 72.21±1.96 |
Tables R1 and R2 show that it is important to select appropriate hyperparameters: $\alpha_1$ has a significant influence on the degree of compression of the subgraph, and $\alpha_2$ plays an important role in learning the prototypes.
**Table R3.**
| The number of prototypes | 3 → 1 | 4 → 2 | 5 → 3 | 6 → 4 | 7 → 5 | 8 → 6 | 9 → 7 |
|--------------------------|-------------|------------|------------|------------|----------------|----------------|------------|
| MUTAG | 81.5±3.91 | 82.0±6.4 | 84.5±4.7 | 85.0±5 | **85.5±5.22** | 84.5±3.5 | 85.0±4.47 |
| PROTEINS | 70.18±2.19 | 71.7±3.02 | 74.4±2.11 | 75.26±2.82 | **77.5±2.42** | 76.25±2.37 | 75.27±3.39 |
| NCI1 | 74.97±1.68 | 77.1±1.80 | 76.7±2.00 | 77.59±1.66 | 78.25±2.13 | **78.81±1.84** | 77.0±1.79 |
| DD | 69.50±2.52 | 73.78±3.94 | 74.79±3.19 | 75.20±3.12 | **76.13±3.76** | 74.04±2.77 | 76.04±2.74 |
Table R3 shows the importance of selecting a sufficient number of prototypes. For example, 4→2 indicates 4 prototypes per class, which are then merged into 2 prototypes per class. Since the number of prototypes determines their diversity, too small a number of prototypes hinders the formation of diverse prototypes.
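As an aside, a prototype merge step of this kind can be sketched as greedy averaging of near-duplicate prototype vectors. This is a hypothetical illustration only: the similarity measure (cosine), threshold, and merge rule here are our assumptions, not the paper's actual merge operation.

```python
import numpy as np

def merge_prototypes(prototypes, threshold=0.95):
    """Greedily merge unit-normalized prototype vectors whose cosine
    similarity exceeds `threshold` (hypothetical sketch)."""
    normed = [p / (np.linalg.norm(p) + 1e-8) for p in prototypes]
    merged = []
    for p in normed:
        for i, m in enumerate(merged):
            if float(p @ m) > threshold:           # near-duplicate: average
                m2 = (m + p) / 2.0
                merged[i] = m2 / (np.linalg.norm(m2) + 1e-8)
                break
        else:
            merged.append(p)
    return merged
```

Two identical prototypes collapse into one while a dissimilar prototype survives, which is the qualitative behavior a merge operation should have.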
---
**W3.** Interpretation analysis
**A3.** Thank you for your valuable feedback. We have attached a pdf file containing interpretation analysis to the global response. We hope this response adequately addresses your concern.
---
**Questions**
**Q1.** Hyperparameter selection
**A.** Thank you for sharing your insights on our experiments. To further clarify our analysis of Figure 6 and A.4.3, we have extended the analysis for $\alpha_1$ and $\alpha_2$ and also included performance evaluation using additional dataset.
- Regarding the term “drastic” in Figure 6(a)
- We understand that it might be subjective. We want to emphasize that, similarly to other GIB methods [1,2], the overall performance of our method is primarily influenced by the classification loss. This explains why the performance remains relatively high even when the value of $\alpha_1$ is high. In the context of $\alpha_1$, the term "drastic" refers to the relative impact on the performance within a specific range of $\alpha_1$ values. We have observed a similar tendency on the additional datasets as well (Table R1).
- Regarding the smaller variation in performance in Figure 6(b)
- As we mentioned before, the overall performance of our method is mainly affected by the classification loss. To conduct a more comprehensive analysis of the performance variation influenced by $\alpha_2$, we conducted experiments on an additional dataset in Table R2. These results demonstrate that although it exhibits less variation in performance compared to the effect of $\alpha_1$, it still clearly illustrates the performance gap between the model with an appropriate $\alpha_2$ and the model without it.
- Furthermore, we want to emphasize that $\alpha_2$ significantly impacts the interpretability of the reasoning process in the Figure PDF-1. As we addressed in the global response, $\alpha_2$ plays a crucial role in ensuring the interpretability of prototypes, thereby enhancing our model's overall explainability.
**Q2.** Ablation
**A.** Thank you for raising this important point and for your valuable suggestion. As mentioned earlier, in Table R3, we have conducted additional experiments where we varied the number of prototypes per class and the final number of prototypes after merging them.
Additionally, we conducted interpretation visualizations of $\mathcal{G}_ {p}$ based on the number of prototypes in Figure PDF-4. When the number of prototypes is small, the prototypes do not contain diverse substructures. This limitation arises due to the necessity of making predictions using a restricted number of prototypes. On the other hand, if the number of prototypes is large, a greater diversity of prototypes can be achieved because various and complex information can be obtained from $\mathcal{G}_ {sub}$.
[1] Yu, Junchi, et al. "Graph information bottleneck for subgraph recognition." arXiv 2020
[2] Yu, Junchi, et al. "Improving subgraph recognition with variational graph information bottleneck." CVPR 2022
**Q3.** Typos
We appreciate your attention to the overlooked typos. We will make sure to fix them for the revised version.
---
Rebuttal Comment 1.1:
Title: Rebuttal acknowledgement
Comment: Thank you for the rebuttal!
I also want to request the authors if they can add curves for training loss or validation/test performance in the appendix for different hyperparameter settings. I am interested to see how the loss functions optimize and whether some balance between $\alpha_1, \alpha_2, \alpha_3$ can also be achieved besides tracking the performance.
In light of the new experiments, and analysis, I'd like to raise my score from 5 to 6.
---
Reply to Comment 1.1.1:
Title: Response by authors
Comment: We thank the reviewer for acknowledging our effort, and for deciding to raise the score.
We provide our test performances according to $\alpha_1$, $\alpha_2$, and $\alpha_3$ in our model. We present the performance based on a few epochs in a simple table format, since we cannot provide you with figures in the discussion phase.
**$\alpha_1$:**
| Epoch | **0** | **0.0001** | **0.001** | **0.01** | **0.1** | **1** |
|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| 1 | 0.4414 | 0.4414 | 0.4685 | 0.4685 | 0.4865 | 0.4775 |
| 10 | 0.6036 | 0.6036 | 0.5856 | 0.4775 | 0.5315 | 0.4234 |
| 20 | 0.6577 | 0.6667 | 0.6667 | 0.5946 | 0.6036 | 0.5676 |
| 25 | 0.7117 | 0.7207 | 0.7207 | 0.6667 | 0.6396 | 0.5676 |
| 35 | 0.7477 | 0.7477 | 0.7207 | 0.7117 | 0.6577 | 0.5135 |
| 45 | 0.7477 | 0.7658 | 0.7207 | 0.7387 | 0.6126 | 0.5495 |
| 50 | 0.7568 | 0.7568 | 0.7207 | 0.6757 | 0.6577 | 0.5405 |
| 55 | 0.7387 | 0.7928 | 0.7568 | 0.6667 | 0.6306 | 0.5315 |

**$\alpha_2$:**
| Epoch | **0** | **0.0001** | **0.001** | **0.01** | **0.1** | **1** |
|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| 1 | 0.4505 | 0.4414 | 0.4505 | 0.4414 | 0.4414 | 0.4595 |
| 5 | 0.4595 | 0.4234 | 0.4685 | 0.4324 | 0.4955 | 0.4505 |
| 10 | 0.5676 | 0.5766 | 0.6036 | 0.5856 | 0.5946 | 0.5856 |
| 15 | 0.5766 | 0.6216 | 0.6036 | 0.6216 | 0.6216 | 0.6036 |
| 25 | 0.6757 | 0.6757 | 0.6757 | 0.6667 | 0.7027 | 0.6667 |
| 30 | 0.6937 | 0.6937 | 0.7297 | 0.6847 | 0.7387 | 0.7117 |
| 35 | 0.6937 | 0.7027 | 0.7027 | 0.7027 | 0.7477 | 0.7207 |
| 40 | 0.7387 | 0.7658 | 0.7477 | 0.7297 | 0.7748 | 0.7568 |

**$\alpha_3$:**
| Epoch | **0** | **1** | **3** | **5** | **7** |
|:---:|:---:|:---:|:---:|:---:|:---:|
| 1 | 0.3604 | 0.3694 | 0.3604 | 0.3694 | 0.3604 |
| 5 | 0.4865 | 0.4955 | 0.4775 | 0.4685 | 0.4775 |
| 10 | 0.5676 | 0.5856 | 0.5856 | 0.5946 | 0.5495 |
| 15 | 0.5856 | 0.6126 | 0.5766 | 0.6036 | 0.5856 |
| 20 | 0.6847 | 0.6577 | 0.6937 | 0.7207 | 0.6847 |
| 35 | 0.6757 | 0.6937 | 0.6847 | 0.7387 | 0.7207 |
| 40 | 0.7117 | 0.7477 | 0.7387 | 0.7477 | 0.7207 |
| 50 | 0.7207 | 0.7387 | 0.7568 | 0.7658 | 0.7477 |
We will consider your suggestion to include more detailed curves for various hyperparameters in the revised version of the appendix. We have discovered the optimal values of $\alpha_1$, $\alpha_2$, and $\alpha_3$ through various hyperparameter searches to achieve balance.
We appreciate your valuable suggestions on our paper. | Summary: This paper investigates the usage of prototype learning for GNN explainability, focusing in particular on identifying key subgraphs through the graph information bottleneck principle.
Extensive experiments are conducted, which consider several baselines and different molecular datasets.
Strengths: - The scope of the paper is well-defined, and the goal of the novel method is relevant.
- The method is well described and motivated, and the code has been made available to improve reproducibility.
- The baselines include diverse, recent SOTA methods. Different real-world datasets have been used. Overall, the experiments appear robust.
Weaknesses: - The main weakness in my opinion is that it is hard to conclude that the proposed method consistently outperforms all the baselines across all the metrics, given the reported results. For example, are the results in Table 1 statistically significant, given how large the confidence interval is? Similarly, in Table 2 for F+, VGIB reports the same results (why they are not bold?).
I think that the proposed method clearly has an advantage for some properties/metrics, but the complexity of the results should be better discussed in the text. I suggest (1) extending the discussion about the limitations of the model and (2) extending the analysis to investigate in what contexts (and why) the model significantly outperforms the baseline.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: See previous point.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations: See previous point on better discussing the limitations of the results.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ---
**Weaknesses**
**Q1.** The main weakness in my opinion is that it is hard to conclude that the proposed method consistently outperforms all the baselines across all the metrics, given the reported results. For example, are the results in Table 1 statistically significant, given how large the confidence interval is? Similarly, in Table 2 for F+, VGIB reports the same results (why they are not bold?).
**A.** Thank you for your valuable feedback. To address your concern, we have conducted a statistical analysis, specifically a paired t-test (n=10), providing the p-value and the confidence interval. We perform a t-test between our methods and the runner-up baseline. Here are the results of the experiments.
---
| Table 1 (Paired t-test) | | | |
|:-----------------------:|:--------:|:-------:|:-----------:|
| Dataset | baseline | p-value | CI |
| MUTAG | VGIB | 0.0606 | (0.0, inf) |
| PROTEINS | GIB | 0.0080 | (0.01, inf) |
| NCI1 | ProtGNN | 0.0006 | (0.02, inf) |
| DD | VGIB | 0.0962 | 0.0962 |
We established a confidence level of 95% and an alternative hypothesis that the mean difference between our method (PGIBcont) and the baseline is greater than 0. With the exception of the MUTAG and DD datasets, the p-values are sufficiently small to reject the null hypothesis, while those two datasets still exhibit values relatively close to the significance level. For the MUTAG dataset, which inherently contains a small number of data points (188 instances), the outcomes tend to exhibit a larger standard deviation (as indicated by [1, 2]). In the DD dataset, IB-based methods tend to have a large standard deviation. However, we would like to highlight that our method does not exhibit such a tendency even though it is based on IB. We attribute this to the consideration of various prototypes beyond merely the substructures of the input graphs.
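For reference, the one-sided paired t statistic used above can be computed as follows (a generic sketch, not the authors' evaluation script; the p-value would then come from the Student-t survival function, e.g. `scipy.stats.t.sf`):

```python
import math

def paired_t_greater(x, y):
    """t statistic and degrees of freedom for a paired t-test with the
    alternative hypothesis mean(x - y) > 0."""
    d = [a - b for a, b in zip(x, y)]
    n = len(d)
    mean = sum(d) / n
    var = sum((v - mean) ** 2 for v in d) / (n - 1)  # sample variance of differences
    return mean / math.sqrt(var / n), n - 1
```

A large positive t (relative to the critical value at the chosen confidence level) rejects the null hypothesis that the two methods perform equally.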
Regarding Table 2 for F+, we apologize for any confusion that we may have caused. Thank you for bringing this to our attention. We will make sure to fix them for the revised version.
[1] Sui, Y., Wang, X., Wu, J., Lin, M., He, X., & Chua, T. S. Causal attention for interpretable and generalizable graph classification. KDD 2022
[2] Yu, J., Xu, T., Rong, Y., Bian, Y., Huang, J., & He, R. (2020). Graph information bottleneck for subgraph recognition. arXiv preprint arXiv:2010.05563.
---
**Q2.** I think that the proposed method clearly has an advantage for some properties/metrics, but the complexity of the results should be better discussed in the text. I suggest (1) extending the discussion about the limitations of the model and (2) extending the analysis to investigate in what contexts (and why) the model significantly outperforms the baseline.
**A.** Thank you for your valuable suggestion regarding further discussions on advantages and limitations of our proposed method.
- **A further limitation of our model**: Beyond the limitation we mentioned in A.7, the performance of our proposed method can be compromised when graphs in the dataset inherently lack task-relevant substructures (e.g., social network data, traffic network data, etc). This is mainly because the key underlying assumption of our model is the existence of task-relevant important substructures.
- **When and why our method performs well**: Conversely, our proposed method demonstrates strong performance when the dataset comprises explicit label-representing substructures, as seen in molecular graphs, due to its capacity to capture subgraphs and conduct predictions. Additionally, the complex correlation between the substructure and various prototypes can be actively used for label prediction, which can improve both label prediction performance and interpretability.
We thank the reviewer for the valuable suggestion, and we will make sure to include the above discussions in the revised paper.
---
Rebuttal Comment 1.1:
Title: Response to rebuttal
Comment: Thanks for the response, which helps clarify my questions.
Overall, I confirm my acceptance rating. | Rebuttal 1:
Rebuttal: We appreciate the reviewers for their valuable comments on our paper. We have conducted additional qualitative analysis, which is provided in the attached PDF. In this analysis, we compare the effects of different choices for $\alpha_1$, $\alpha_2$, $\alpha_3$ (i.e., loss weights) and the number of prototypes at a more fine-grained level.
- Impact of $\alpha_1$
As discussed in Section 4.4, the hyperparameter $\alpha_1$ has an impact on the compression of the subgraph from the entire input graph. In Figure PDF-2, when $\alpha_1$ is less than 0.01, certain parts of the key substructure, specifically NO2 in this case, are excluded, resulting in a decrease in both interpretability and performance.
- Impact of $\alpha_2$
We have extended the scale of qualitative analysis on $\alpha_2$ shown in Figure 5 to provide a better understanding of its impact. It is crucial that the subgraphs of prototypes not only contain key structural information from the subgraph found by IB but also ensure a certain level of diversity, since each class is represented by multiple prototypes to enhance the model's capacity.
In Figure PDF-1, when we fix $\alpha_1$ to 0.1 the diversity of prototypes varies based on the degrees of $\alpha_2$. Specifically, when $\alpha_2$ becomes 1, the diversity of prototypes decreases, leading to a decline in the interpretability of the reasoning process and the overall model performance. This finding highlights the importance of selecting proper $\alpha_2$ to ensure both interpretability and performance are optimized.
- Impact of $\alpha_3$
The hyperparameter $\alpha_3$ is associated with the connectivity loss, which plays a crucial role in the interpretability of $G_{sub}$ by promoting a compact topology. In real-world datasets, the key substructure often tends to form non-connected components without $\alpha_3$. In Figure PDF-3, when we exclude the connectivity loss from the final loss function (i.e., set $\alpha_3$ to 0), $G_{sub}$ tends to consist of multiple connected components. As a result, due to the wide and scattered range of detected subgraphs, the absence of connectivity loss results in the formation of unrealistic subgraphs.
We hope that this material highlights the contributions of our research and provides a better understanding of our work.
Pdf: /pdf/6f7f3b363da59617e8603eaa312f94e16567acab.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
DELIFFAS: Deformable Light Fields for Fast Avatar Synthesis | Accept (poster) | Summary: The paper introduces a method for real-time rendering of animatable full-body avatars. The main contribution is a hybrid mesh-lightfield representation: two surfaces are produced with an off-the-shelf motion-to-surface model, and normals of those surfaces are then used as input to two UNet, the output of those is fit into the lightfield MLP as features. Method is near real-time and runs at 30FPS on a GPU for 1K resolution. Quantitative and qualitative results over recent baselines demonstrate that the method is on par with state-of-the-art.
Strengths: - Paper is well-written and is easy to follow.
- Proposed method seems technically sound, and the two-layer surface formulation fits really well with the lightfield-based neural rendering.
- Apart from the quality, proposed formulation runs at 31fps for a 1K resolution image, and only requires 1 MLP evaluation per ray.
- Quantitative and qualitative evaluation is thorough, the choice of baselines is reasonable (although there are some potential issues, please see below). Ablation study is present and seems to suggest that the proposed two-layer formulation is less sensitive to the quality of the mesh (which can be considered a contribution in itself).
Weaknesses: - There are not too many details on how the deformed mesh is obtained, but assuming it is only conditioned on the motion, it is unclear whether providing normals of the resulting mesh should be sufficient to reconstruct the full image: it would seem that the state of person's clothing cannot be fully described by the pose or sequence of poses.
- On a similar note, it looks like the underlying mesh actually has a measurable effect on the method's performance (judging from the numerical results in Table 2). From Figure 6, it is unclear if the quality comparison is done for both models trained with perceptual loss or not. Yet, the authors do not really provide any details on how to build the underlying motion-to-mesh model.
- Claim "This allows perceptual supervision ... compared to previous approaches" is incorrect, e.g. [Habermann'15], [Bagautdinov'21], DVA [Remelli'22] should support full-image supervision with arbitrary perceptual losses. Generally, it is a bit unclear how much of the qualitative improvements are coming from the use of perceptual loss (which I would not consider a contribution of this work).
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: - Does the mesh model take as input anything apart from the pose? Would it be sufficient to capture non-pose-dependent deformations?
- Do you expect the quality of the method to degrade on grazing angles?
- Is the number of parameters between a single- and two- layered model the same? In the two-layer case, there are two UNets, which could also explain the boost in the performance?
- Could you please confirm that the qualitative comparison (Figure 6) is done consistently for both single- and two-layer models trained with/without perceptual loss? Might be a good idea to be more explicit about this in Table 1 as well.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: Authors did not really discuss limitations.
Suggestions:
- inability to capture realistic dynamics due to information-deficient inputs.
- artifacts on grazing angles due to incoherent features between the two layers.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the valuable feedback to further improve our work. Please see our visual and quantitative ablations in the global response (rebuttal.pdf).
* * *
**Details about the mesh**
The method we use to compute the deformed mesh is DDC [1], and it is only conditioned on the motion history. We will include more details about it in the supplementary material. It is true that the normal map is not sufficient to fully describe the subject's clothing state, which can result in rather blurry images. This is why we employed perceptual supervision so that we can recover the fine details.
* * *
**Quality of mesh**
Our performance degrades if a coarser mesh is used. However, note that our method with coarse geometry or even an SMPL mesh can still recover the real geometry, in contrast to the single-surface baseline where the system can only paint on the given mesh (see main paper Fig. 6 and rebuttal.pdf Fig. 2, 4). Also, it would be an interesting research direction to jointly refine the underlying mesh during training and improve the rendering quality.
The mesh model we used is DDC. Alternatively, one can use the SMPL model (see rebuttal.pdf-Fig.4) or SMPL+D to deal with the loose clothing. Also, we can use recent Avatar NeRF works to compute the canonical mesh (template mesh) and then use skinning-based deformation to obtain the deformed mesh for the new pose. We will include more details about DDC and other alternatives in the supplementary material.
For Fig.6, both models are trained without a perceptual loss. We included the result only with L1 supervision for the ablations because otherwise it would be unclear whether the proposed two-surface representation or the perceptual supervision improves the result quality. We will make this more clear in the revision. We also included ablations with perceptual supervision in the rebuttal document (see rebuttal.pdf-Fig.1,Fig.2,Tab.1)
* * *
**Claim about perceptual supervision**
We did not claim that we are the only work that employs perceptual supervision on the entire image. We claimed that we can employ it thanks to our fast rendering speed, while coordinate-based MLP methods (e.g., NeRF methods) cannot, or can only employ perceptual supervision on small patches due to their slow rendering speed. As shown in the supplementary material Fig. 1 and rebuttal.pdf Fig. 1, the perceptual supervision on the entire image allows recovering the fine details.
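As a rough back-of-the-envelope illustration of why full-image supervision is practical here (the sample count of 128 is an assumed, typical NeRF value, not a number from the paper): a light field needs one network query per pixel, while a volumetric renderer needs one per sample along each ray.

```python
def network_queries(width, height, samples_per_ray):
    """Total MLP evaluations needed to render one frame."""
    return width * height * samples_per_ray

# one query per ray (light field) vs. e.g. 128 samples per ray (NeRF-style)
light_field = network_queries(1024, 1024, 1)
nerf_style = network_queries(1024, 1024, 128)
assert nerf_style == 128 * light_field
```

This two-orders-of-magnitude gap in per-frame cost is what makes losses over the entire rendered image (rather than small patches) affordable during training.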
* * *
**Input to the mesh model**
The only input to the deformation surface model is the pose history. This is not sufficient to capture fine details, and this is why we employ perceptual supervision to represent such details.
* * *
**Grazing angles**
We have already shown the rendering results on the grazing angles in the supplementary material video (00:14-00:30, 00:45-00:54, 02:17-02:27), where the camera is freely rotating around the subject. However, we have found no notable artifact or quality degradation on grazing angles.
* * *
**Number of parameters**
The number of total parameters is larger in the two-surface model, as it includes two separate U-Nets for extracting a feature map from each surface, while the single-surface model only has a single U-Net. The U-Net and MLP architectures used in both the single- and two-surface models are the same.
To verify that the performance boost is coming from the two-surface design, we additionally trained a “single-surface with two U-Net” variant (see rebuttal.pdf-Fig.5,Tab.1-c,i). Even though now the network size is the same, our two-surface representation still outperforms the single-surface baseline.
* * *
**Consistent experiments in Figure 6 and Table 1**
The number reported in Table 1 (comparison with other works) is computed with our final model with full supervision (two-surface + L1 + perceptual). The variants in Table 2 (ablation study) are all trained with L1-only supervision except our final model (Tab. 2-d). The qualitative results on the smooth mesh in Fig. 6 are also generated with models trained with L1-only supervision, for both the single- and two-surface models. Again, excluding the perceptual supervision from the ablation study is to verify the effectiveness of our two-surface design. Lastly, we additionally verify the effect of perceptual supervision on the entire image in the supplementary document Fig. 1. We will make it more clear in the text and figure caption that the results in Fig. 6 are generated with models trained only with L1 supervision.
Also, we included the ablation results with perceptual supervision as well in the rebuttal.pdf-Fig.1,2,Tab.1. Similar to the results without perceptual supervision, our full model outperforms other variants.
* * *
**Limitations**
Thank you for the suggestion. Regarding limitation 1, it is true that the normal map is not sufficient to fully describe the subject's clothing state, which can result in rather blurry images. This is why we employed perceptual supervision so that we can recover the fine details. However, it would be a promising direction to adopt generative models (GAN, VAE, ...) to generate even more photorealistic details. Regarding limitation 2, although we have found no notable artifacts at grazing angles (as can be seen in the free-viewpoint rendering results in the supplementary video), it would be an interesting research direction to further refine the design for such corner cases. We will include these in the limitations section of the revision.
[1] Habermann et al. “Real-time deep dynamic characters.” In TOG 2021.
---
Rebuttal Comment 1.1:
Comment: Thanks for the detailed response. I would encourage authors to reformulate the writing to make the point about perceptual loss more clear. I think fundamentally only conditioning on pose history is not enough to recover the true underlying details, but I do see that a perceptual metric is a better choice than just L2 to tackle this (although adversarial training would make more sense tbh) - but not sure if this is a particularly novel observation which should be claimed as a contribution.
Overall though, I think the two-layer lightfield representation proposed in this work is a good idea, and thus I stand by my original rating.
---
Reply to Comment 1.1.1:
Title: Response to Reviewer qBLT
Comment: Thank you for taking the time to review our work. We will make the claim about perceptual supervision more clear in the final version. Moreover, we agree that modeling the non-pose-dependent effects more explicitly would be an interesting avenue for future work. | Summary: This work proposes a method for human avatar reconstruction from multiview video data. It leverages deformable light fields to model the geometry and texture. Experiments show that this method outperforms the existing methods in terms of novel view and novel pose synthesis.
Strengths: 1. The idea of using light field to improve computation efficiency is novel and well-motivated. It can overcome some of the drawbacks of NeRF, such as the heavy computation cost for point sampling.
2. Using mesh as geometry representation is also reasonable as it is naturally compatible with conventional rendering engines, which could open up many application possibilities.
Weaknesses: 1. Although the framework is novel, the idea of using differentiable rasterization for animatable human avatar creation has already been explored in prior work [1]; please explain the difference between this work and [1].
2. There may exist a risk of overfitting because this work only experiments on one dataset; it would be better to add more experiments on commonly used benchmark datasets such as ZJU-Mocap.
3. Efficiency is one of the most important contributions of this work; the authors could report more numerical results to verify its rendering speed, and it would be better to compare it with [1].
4. The setting for ablation study (line 269) is confusing, I don’t understand why other variants are not trained with perceptual loss. I think this loss term should not be skipped except for studying the necessity of optimization objectives.
[1] UV Volumes for Real-time Rendering of Editable Free-view Human Performance
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: The training data are multiview videos, is the proposed method possible to be trained on monocular videos?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Addressed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the valuable feedback to further improve our work. Please see our visual and quantitative ablations in the global response (rebuttal.pdf).
* * *
**Difference to UV volumes**
Please note that Chen et al. [1] is a concurrent work presented at CVPR 2023, which was held after the NeurIPS submission deadline. The differences between [1] and ours are as follows: [1] is a single-surface-based method that is tightly bound to the underlying mesh (optimized UV map). However, ours can go beyond the underlying mesh and recover the real geometry that lies between the two surfaces (see main paper-Fig.5,6, rebuttal.pdf-Fig.2,4). Also, [1] supervises the UV map with the DensePose prediction. Therefore, as mentioned in their limitations section, they can only handle clothing types that roughly fit the human body. On the other hand, ours can handle loose garments such as skirts. We will cite this work and discuss the differences in the revised version.
* * *
**Risk of overfitting**
DynaCap is also a well-established benchmark used by other works [2,3,4,5] and known to be significantly more challenging than ZJU-Mocap in terms of pose variety. For example, the test sequence comprises around 7000 frames with challenging poses. Further, our results show that methods that perform reasonably well on ZJU-Mocap obtain significantly more blurred results on DynaCap due to the increased pose variety (see main paper-Fig.4 NB, A-NeRF results).
* * *
**Numerical results for runtime performance**
We report the performance comparison with Chen et al. on a single A100 GPU.
Due to the limited time, we report the performance of Chen et al. according to their paper-Tab.4.
Chen et al. achieves 18 fps on the novel pose synthesis task while ours achieves 31 fps. Although the exact pipelines differ somewhat, both Chen et al.’s and ours can be divided into three stages: (1) obtaining the UV map, (2) feature map generation (CNN), and (3) MLP forwarding to get the final color value.
**(1) Obtaining UV**
Chen et al. generates a UV feature volume using a sparse CNN (48.78 ms). Density along the camera ray is computed (7.08 ms). Then the image-space UV feature map is volume-rendered (1.73 ms). The UV MLP converts the feature into UV coordinates (1.53 ms). This stage sums up to 59.12 ms.
We utilize a GPU-accelerated rasterizer to render the image-space UV map, which takes 3.58 ms.
**(2) Feature map generation (CNN)**
Chen et al. takes 7.52ms to compute the neural texture map.
Our U-Nets generate normal feature maps in 18.85 ms.
**(3) MLP forwarding to compute the final color**
Chen et al. takes 1.60 ms to compute color using MLP.
We take 6.78 ms to bilinearly sample the feature corresponding to each pixel, and the light field MLP then takes 2.52 ms to compute the color value.
In total, Chen et al. takes 68.23 ms and ours takes 31.73 ms. We will include this comparison in the revision.
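For readers who want to check the arithmetic, the per-stage timings quoted above can be summed as follows. This is only an illustrative sketch using the numbers from this rebuttal; the summed total for Chen et al. comes out to 68.24 ms rather than the quoted 68.23 ms, presumably due to rounding of the per-stage figures.

```python
# Per-stage timings in milliseconds, as quoted in the rebuttal text above.
chen_et_al = {
    "obtain_uv": 48.78 + 7.08 + 1.73 + 1.53,  # sparse CNN + density + volume rendering + UV MLP
    "feature_map_cnn": 7.52,                   # neural texture map
    "color_mlp": 1.60,                         # final color MLP
}
ours = {
    "obtain_uv": 3.58,          # GPU rasterizer renders the image-space UV map
    "feature_map_cnn": 18.85,   # U-Nets generate normal feature maps
    "color_mlp": 6.78 + 2.52,   # bilinear feature sampling + light field MLP
}

def total_ms(stages):
    return sum(stages.values())

def fps(ms_per_frame):
    return 1000.0 / ms_per_frame

print(f"Chen et al.: {total_ms(chen_et_al):.2f} ms -> {fps(total_ms(chen_et_al)):.1f} fps")
print(f"Ours:        {total_ms(ours):.2f} ms -> {fps(total_ms(ours)):.1f} fps")
```

The per-frame total for "ours" (31.73 ms) corresponds to the ~31 fps figure reported above.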
* * *
**Ablation study**
The reason we included ablation results generated only with L1 supervision is that otherwise it would be unclear whether the proposed two-surface representation or the perceptual supervision improves the result quality.
We also included the ablation results with perceptual supervision as well in the rebuttal.pdf-Fig.1,2,Tab.1. Similar to the results without perceptual supervision, our full model outperforms other variants.
* * *
**Monocular videos**
Although we could not try this due to the limited time, we expect that the performance would naturally degrade as the system has less chance to observe multi-view information. We will leave this as future work.
[1] Chen et al. “UV Volumes for Real-time Rendering of Editable Free-view Human Performance.” In CVPR 2023.
[2] Habermann et al. “Real-time deep dynamic characters.” In TOG 2021.
[3] Liu et al. “Neural actor: Neural free-view synthesis of human actors with pose control.” In TOG 2021.
[4] Zheng et al. “Structured Local Radiance Fields for Human Avatar Modeling.” In CVPR 2022.
[5] Peng et al. “Implicit Neural Representations with Structured Latent Codes for Human Body Modeling.” In TPAMI 2023.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for the clarifications. The additional numerical results have addressed most of my concerns about computation cost and ablation studies. However, the risk of overfitting has not been resolved yet. I appreciate that the authors named some prior works [2,3,4,5] to show that DynaCap is a widely used benchmark, and I agree with this. But I carefully checked [2,3,4,5] and found that all of them experiment on two or three different datasets, or at least sample sequences from different datasets, to avoid overfitting. Unfortunately, there are no results for a second dataset (neither quantitative nor qualitative) in the rebuttal. Thus, I will keep my rating.
---
Reply to Comment 1.1.1:
Title: Response to Reviewer LHHA
Comment: We thank the reviewer for the feedback and are happy to hear that most concerns are addressed by our rebuttal, which we will also carefully incorporate in our final version.
Concerning the overfitting, we agree that an additional sequence would further confirm the superiority of our design. However, due to the limited time, it was not possible to obtain such an evaluation. We are happy to add more results on other datasets in the final version if requested.
For now, we would like to again highlight the complexity of the DynaCap dataset compared to other datasets, which makes overfitting nearly impossible. For example, DynaCap training sequences comprise about 17,000-19,000 frames, and testing sequences about 7,000 frames including a variety of dynamic and challenging poses; this already validates our method. Please note that ZJU-Mocap, which the reviewer suggested, only comprises about 300-1,000 frames and is much less challenging. Moreover, we already show qualitative results on other subjects, including a subject with challenging loose garments (i.e., skirts). This also qualitatively confirms our generalizability.
As the reviewer pointed out, we have addressed the reviewer’s other concerns. For the only concern about the evaluated datasets, we believe that the DynaCap dataset is more challenging in terms of motion diversity compared to other datasets and our improvements on the DynaCap dataset prove the superiority of our method. Considering these, we hope you could consider increasing your rating. | Summary: The paper introduces DELIFFAS, an innovative method for generating controllable and photorealistic digital human avatars in real-time. This system utilizes a deformable two-surface representation to parameterize a surface light field, deforming two surfaces according to a deformable mesh model, allowing the light field to be driven by skeletal motion. This approach significantly increases inference speed, enabling the entire image to be rendered during training, unlike previous methods. This faster rendering time also allows for full-image level perceptual supervision, in contrast to earlier approaches, which could only supervise individual pixels or small patches due to slow runtime.
Strengths: Key strengths of the paper:
- The proposed method DELIFFAS can generate controllable and photorealistic human avatars in real-time.
- The use of a deformable two-surface representation to parameterize a surface light field improves both accuracy and efficiency of appearance synthesis.
- The efficient design of the neural architecture and light field representation allows for integration into the graphics pipeline, resulting in real-time performance.
- Due to the high inference speed, the system can render the entire image during training and employ full-image level perceptual supervision.
Weaknesses: Some potential weaknesses of the paper:
- Limitations in Pose Variety: It is unclear how well the model would perform with extremely unusual or rare poses, or if it would be able to accurately generate and maintain details under these conditions.
- Dependency on Underlying Mesh Quality: While the authors have demonstrated that their method can tolerate coarser geometry, the accuracy and realism of the output may still depend on the quality of the underlying mesh.
- Robustness to Complex Clothing: While the authors mention that their method can handle "challenging loose garment types," there may be limitations when dealing with more complex or unusual clothing, especially if such examples are not well represented in the training data. This can often be the case with common clothing that exhibits highly glossy or specular lighting effects, such as silk.
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: - How does the DELIFFAS model perform when exposed to extreme poses or movements not well-represented in the training data? Are there specific types of movement or pose it struggles to accurately represent?
- Could you elaborate on the computational resources required for real-time implementation of the DELIFFAS model? A100 is a quite heavy GPU configuration, what happens if tested on other GPU configurations for the performance?
- How would the system handle more complex or unusual clothing types, especially those not well-represented in the training data?
- The method seems to rely heavily on the quality of the underlying mesh model.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 2 fair
Limitations: Discussed above, in the weakness section.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the valuable feedback to further improve our work. Please see our visual and quantitative ablations in the global response (rebuttal.pdf).
* * *
**Pose variety**
Typically, our method is robust to rather challenging poses; however, completely out-of-distribution poses such as a handstand might not work since the cloth deformation can be very large, e.g., the shirt pulling down. We encourage the reviewer to look at the publicly available test set of the DynaCap sequences, which comprises a large variety of poses.
* * *
**Dependency on mesh quality**
Our performance can degrade if a coarser mesh is used. However, note that our method with coarse geometry or even SMPL mesh can still recover the real geometry compared to the single surface baseline where the approach can only paint on the given mesh (see main paper-Fig.6, rebuttal.pdf-Fig.2,4).
Also, it would be an interesting research direction to jointly refine the underlying mesh during training and thereby improve the rendering quality. However, it is important to note that the deformable character model is also solely learned from multi-view video; thus, the input assumptions are the same as for competing methods.
* * *
**Robustness to complex clothing**
Very complex clothing (e.g., highly glossy garment) might be challenging, though, in contrast to many related works, we demonstrate that loose clothing can work while most other methods only operate under the assumption of tight clothing. We will discuss this in the limitations.
* * *
**Computational resources**
As we stated in the main paper L41-45, our method runs at 31 fps for rendering a single 1K image (940X1285) on a single A100 GPU with Intel Xeon CPU.
When tested on an A40, we still achieve a real-time speed of 26 fps. Also, please note that our method runs 100X faster than the SOTA methods NA and HDHuman when the same computing resources are given.
Furthermore, most of our computation time comes from the U-Net feature map extraction (\~60%) and the deterministic bilinear sampling of features (\~20%). We use a standard U-Net architecture and TensorFlow’s bilinear sampling, but more efficient architectures and sampling implementations could be explored to further reduce the runtime.
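As a sanity check on the ~60%/~20% component shares quoted above, the following sketch recomputes them from the per-stage timings reported elsewhere in this rebuttal thread (31.73 ms total per frame, 18.85 ms for the U-Nets, 6.78 ms for bilinear sampling); it is illustrative only, not a profiling result.

```python
# Timings (ms) quoted in the rebuttal; shares are fractions of total frame time.
total_ms = 31.73      # total per-frame runtime
unet_ms = 18.85       # U-Net feature map extraction
sampling_ms = 6.78    # bilinear feature sampling

print(f"U-Net share:    {unet_ms / total_ms:.0%}")     # ~59%
print(f"Sampling share: {sampling_ms / total_ms:.0%}")  # ~21%
```

These come out to roughly 59% and 21%, consistent with the ~60% and ~20% figures in the text.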
* * *
**Clothing not in training data**
Our method is a person/clothing-specific method and we do not claim generalization to new clothing (like most previous work). | Summary: This paper aims to learn digital human body avatars from multi-view videos. To achieve both photorealism and fast inference speed, the authors introduce a new representation based on a surface light field. The surface light field is attached to a drivable human mesh model, and is conditioned on skeleton motion. Such a representation can be easily defined on a texture atlas, and efficiently rendered with standard rasterization pipelines. Overall, the proposed representation allows perceptual supervision and real-time rendering, resulting in superior performance over existing baselines.
Strengths: 1. The proposed method enables real-time rendering of full-body human avatars with a resolution of 1K. This is a nice property for many down-stream interactive applications.
2. The proposed representation further allows perceptual supervision on the full image, which is crucial for learning high-quality appearance details like cloth wrinkles.
3. All important implementation details are adequately discussed in the main paper and the supplemental document. For example, in line 179-180, the authors describe how they handle corner cases where a ray only intersects with the outer surface.
Weaknesses: 1. Although the proposed representation is based on a light field defined by two deformable surfaces and demonstrates superior performance over the single-surface baseline in Sec.4.3, it is still tightly bound to the underlying mesh model. According to Line 149, the shell bounded by the two surfaces has a thickness of only 3 cm, which is relatively small. This means that the underlying mesh model should be a close approximation of the real avatar surface. Therefore, I am skeptical about whether other human body templates (like SMPL) can be used in the proposed method, although the authors claim in Line 122 that "any other deformable surface representation could be used as well".
2. Although the proposed two-surface representation is nice for modeling dynamic texture, the geometric deformation is not modeled. In Line 118, the authors mention that they assume "a skeletal motion-dependent deformable mesh surface is given". This can be obtained for the task of novel view synthesis, as mesh tracking is a key step in data preprocessing. However, this is very challenging for novel pose synthesis, especially for loose garments like a long dress. Unfortunately, I can't find any details about how they obtain the mesh model for novel poses in the paper.
3. The authors claimed in the Abstract that the combination of photorealism and inference speed "still remains unsolved", which I think is not true. Some recent works have already achieved real-time neural character rendering with highly photorealistic quality. For example, Ouyang et al. [a] proposed to use multi-plane images (MPI), which is also a light field representation, to render an animatable character in real-time. Ouyang et al. also applied perceptual supervision on full images to learn high-quality appearance details. Therefore, I think this submission should cite, discuss, and perhaps compare with this highly relevant work [a].
[a] Ouyang et al. Real-Time Neural Character Rendering with Pose-Guided Multiplane Images. ECCV 2022.
4. In Line 199-200, the authors mention that the blurry results trained with the L1 loss are due to the one-to-many mapping problem. However, their solution, i.e., applying perceptual supervision, does not resolve this problem; in other words, the one-to-many problem still exists after applying the perceptual loss. I guess this is the main reason for the jittering texture in the supplemental video (00:14-00:29).
5. The novel pose sequences for the long skirt case (supplemental video 00:56-00:59) are too short.
Typos: Line 278: In Fig.5, **We** --> In Fig.5, **we**
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: None
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: The authors discuss the limitations and potential societal impact in the supplemental document.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the valuable feedback to further improve our work. Please see our visual and quantitative ablations in the global response (rebuttal.pdf).
* * *
**Bounded to mesh**
We conducted an experiment where we use the SMPL model as the deformable mesh model (see rebuttal.pdf-Fig.4). One can see that our approach even for this very coarse model still outperforms the baselines proving the flexibility of our representation while a better template mesh further improves the results. We will include this experiment in the camera ready version.
* * *
**Details about the mesh model**
We used the off-the-shelf DDC [1] as our deformable surface model as mentioned in Section 3.1. This model takes a (novel) skeletal motion as input and generates the posed and non-rigidly deformed template while being supervised solely on multi-view imagery. We will add more details about DDC in the final version.
* * *
**Claim with respect to Ouyang et al.**
Although Ouyang et al. achieve impressive and photorealistic results, they only allow minimal pose changes and mostly overfit to the specific recording, as also stated in their limitations. In contrast, our method can handle arbitrary poses at test time and, thus, is a controllable character representation. Moreover, they mostly show frontward-facing camera movements, while our method supports full 360-degree camera rotations around the human subject. We will adjust the claims while considering Ouyang et al. and discuss the differences between Ouyang et al. and ours in the revision.
* * *
**Solving the one-to-many mapping**
Like most works we do not explicitly model these stochastic effects in clothing, but we found that the perceptual loss effectively addresses them (see main paper-Fig.5 c,d) while some small residual might still remain. Also, since we can render the entire image thanks to the fast rendering speed, generative models (e.g., GAN, VAE) could be incorporated to model this stochastic process. We leave this as future work.
* * *
**Sequence too short**
We will increase the length of the sequence.
[1] Habermann et al. “Real-time deep dynamic characters.” In TOG 2021.
---
Rebuttal Comment 1.1:
Comment: I would like to thank the authors for their reply to my questions. My major concerns are addressed in the rebuttal. After reading the rebuttal and other reviews, I would like to keep my original positive position. | Rebuttal 1:
Rebuttal: We thank all the reviewers for their time and effort to review our work and to further improve it. We address individual concerns in the reviewer-specific rebuttal dialogue and provide a pdf (rebuttal.pdf) comprising additional visualizations and experiments.
Pdf: /pdf/df0dc0a31372eb80f5345ea5fb7535d993cedada.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: The paper presents a real-time approach for generating controllable and photorealistic digital human avatars. The main contribution of the paper is to learn the texture of the avatar using a light-field representation. In contrast to other continuous representations (e.g., NeRF), the proposed method can predict the color of a ray using a single MLP call. This allows faster rendering of the images which in turn allows using perceptual losses during training and yields significantly better synthesis results. The paper mainly extends the Deep Dynamic Characters (DDC) method and replaces its TextureNet with the proposed light-field representation. The experiments are performed on the DynaCap dataset where the proposed method is shown to outperform other real-time methods while being on par with the non-realtime methods.
Strengths: - The use of the light slab method for learning the texture details of the avatar is novel and makes sense. I believe this has the potential to be used in other methods as well where faster rendering speed is required.
- Since the color for each ray can be predicted in one shot, the proposed method allows rendering full images during training. Hence, perceptual losses can be used and are shown to help significantly (LPIPS: 18.51 vs 12.85)
- The proposed method achieves results on par with the state-of-the-art while being ~100x faster.
- The paper is well-written and easy to read, though some typos need to be fixed
Weaknesses: ### Novelty
- My main concern with the paper is its limited novelty. The paper builds on DDC and mostly replaces its TextureNet with the proposed light-field-based MLPs. It is unclear whether better results are due to better engineering or the proposed light-field-based approach. Is the implementation of DDC and the proposed method exactly the same with texture nets being the only difference between the two?
### Missing Experiments
- I would have liked to see an experiment in which light-field is replaced with vanilla NeRF while having everything else exactly the same.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: ### Checkerboard artifacts:
- What are those checkerboard artifacts in Fig. 4? With such artifacts it is very easy to identify that the images are computer generated, hence it raises a question regarding the paper's claim of `photorealistic` synthesis. Perhaps tone down the photorealistic claim if these artifacts are not preventable.
### Experiments
- The best setting of the proposed method is with $L_{perc}$; I wonder why the ablation studies are performed without this loss. What happens if we use $L_{perc}$ with the `Single surface + viewing direction` setting?
### Overview figure
Why is the red dot at a different location in the feature maps? Shouldn't it be consistent with the temporal normal maps?
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The limitations of the paper are not highlighted in the paper. The authors should add the during the rebuttal.
Potential limitations are:
1. Requirement of an initial 3D scan of the person as required by DDC, which is not easy to obtain without having access to sophisticated hardware.
2. Long training sequences.
3. Use in deep fakes.
4. Privacy concerns, etc.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the valuable feedback to further improve our work. Please see our visual and quantitative ablations in the global response (rebuttal.pdf).
* * *
**Novelty**
We would like to emphasize that this work is about finding an appearance representation that is both efficient and supports high quality. To this end, we present the deformable two-surface light field parameterization. We assume a deformable mesh model is given; this can be DDC or, as our ablation shows, a coarser deformable mesh or even the SMPL mesh, which works reasonably well (see rebuttal.pdf-Fig.2,4). Originally, we reported the numbers provided by the authors of DDC. For the rebuttal, we trained DDC’s TexNet with our exact checkpoint for the geometry networks (EGNet and DeltaNet) to alleviate any other influence. We found that our geometry checkpoint even has a lower performance than the original one. Note that our representation clearly outperforms DDC’s results (see below). Moreover, our ablation ‘single surface + viewing direction’ (main paper Fig.5-b, Tab.2-b) can be considered a variant of DDC, further confirming the superiority of our appearance representation.
| Novel view | PSNR | LPIPS | FID |
|-----------------------------|-----------|-----------|----------|
| DDC (paper) | 32.96 | 20.07 | 27.73 |
| DDC (same geometry network) | 29.50 | 32.66 | 44.18 |
| **Ours** | **33.30** | **12.85** | **8.69** |
| Novel pose | PSNR | LPIPS | FID |
|-----------------------------|-----------|-----------|----------|
| DDC (paper) | **28.05** | 30.43 | 38.37 |
| DDC (same geometry network) | 26.59 | 43.43 | 61.92 |
| **Ours** | 27.95 | **26.59** | **26.16** |
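For context, a small illustrative sketch computing the relative improvements of "Ours" over DDC (paper) on the novel-view numbers from the tables above (lower LPIPS/FID is better); the percentages are derived from the quoted values, not additional measurements.

```python
# Novel-view metrics as quoted in the comparison table above.
ddc = {"PSNR": 32.96, "LPIPS": 20.07, "FID": 27.73}
ours = {"PSNR": 33.30, "LPIPS": 12.85, "FID": 8.69}

# Relative reductions in the two perceptual metrics (lower is better).
lpips_reduction = (ddc["LPIPS"] - ours["LPIPS"]) / ddc["LPIPS"]
fid_reduction = (ddc["FID"] - ours["FID"]) / ddc["FID"]
print(f"LPIPS reduced by {lpips_reduction:.0%}")  # ~36%
print(f"FID reduced by {fid_reduction:.0%}")      # ~69%
```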
* * *
**NeRF ablation**
This might be challenging, as rendering a 1K image with volumetric rendering takes very long, e.g., Neural Actor and HDHuman take 4-5 seconds, which makes perceptual supervision on the entire image impractical.
* * *
**Checkerboard artifacts**
This is not a checkerboard artifact. The D2 subject has a stripe pattern on the T-shirt when viewed close up (see GT of supplementary document-Fig.1 and rebuttal.pdf-Fig.3), and our approach learned to reproduce it. This actually shows our ability to capture very fine details.
* * *
**Why ablation study without L_perc? / Additional Ablation “single surface + viewing direction + perceptual loss”**
We conducted this ablation to demonstrate that the proposed two-surface representation in isolation with the perceptual supervision achieves improved visual accuracy, which is confirmed by our experiment. Adding the perceptual supervision further improves visual quality.
For the rebuttal, we further added ablations with perceptual supervision as requested by the reviewer (see rebuttal.pdf-Fig.1,2,Tab.1). The variant with “single surface + viewing direction + perceptual loss” is shown in rebuttal.pdf-Fig.1-e and Tab.1-h.
In summary, the combination of our proposed two-surface representation and the perceptual supervision achieves the best result.
* * *
**Limitations**
We will include the suggested limitations in the revision. Regarding the first limitation (requirement of an initial 3D scan which is not easy to obtain), we would like to note that this assumption is not too restrictive as competing methods (we compare to) rely on a multi-view sequence. A template from multi-view imagery could be acquired by methods like multi-view PIFu.
Regarding the second limitation (long training sequences), this is rather a pro than a con. Most methods are not even able to train on such long sequences since their representation fails to optimize parameters that can cover such diverse poses and, as a consequence, produces very blurry results (see main paper-Fig.4 NB, A-NeRF results).
Concerning deep fake generation and privacy concerns, we will add such a discussion.
Moreover, we would like to highlight that we discuss limitations in the supplemental document. However, for the final version we will incorporate this paragraph into the main manuscript.
***
**Overview figure**
The location of the red dots should be consistent with the temporal normal maps. We will modify this in the revision.
---
Rebuttal Comment 1.1:
Comment: Thank you for providing additional explanations. All my concerns have been addressed. I maintain my original rating. | null | null | null | null | null | null |
Similarity-based cooperative equilibrium | Accept (poster) | Summary: The paper proposes similarity-based cooperative equilibrium, which extends program equilibrium to a setting of partial transparency. This modification is more practical (as full transparency is much less realistic and is hard to work with) and has useful theoretical properties (e.g., the folk theorem states that by choosing an appropriate similarity measure, one can implement the same cooperative equilibria as if given full transparency). A simple (deep) RL algorithm is proposed to find the proposed equilibrium, which is tested in a newly proposed high-dimensional variant of PD.
Strengths: - The paper builds on existing concepts like program equilibrium to propose a logical extension that is more practical and relevant for AI.
- The paper presents a strong theory followed by limited but representative experiments.
- The paper is well-written and well-structured. I appreciate the examples in the introduction and the explanations throughout the paper.
- HDPD is a cool independent contribution.
- Experiments with a relevant baseline, LOLA, are provided.
Weaknesses: The only potential reason to not accept this paper is if other reviewers find problems with the theory (e.g., proofs), as I have only glanced over it. Otherwise, I do not see reasons to not accept this paper.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: - There is some very recent work about cooperative equilibria + MARL that could be discussed/mentioned in the camera ready. https://dl.acm.org/doi/10.5555/3545946.3598670 https://arxiv.org/abs/2305.06807 https://dl.acm.org/doi/10.5555/3545946.3598618
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 4 excellent
Limitations: - The theoretical analysis is restricted to a certain class of diff-based policies (lines 161-163).
- As mentioned in section 6.4, the proposed empirical method may have limited applicability in more complex games.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank Reviewer ty7R for their efforts in evaluating our manuscript! We are glad that the reviewer found the paper so interesting.
>There is some very recent work about cooperative equilibria + MARL that could be discussed/mentioned in the camera ready. https://dl.acm.org/doi/10.5555/3545946.3598670 https://arxiv.org/abs/2305.06807 https://dl.acm.org/doi/10.5555/3545946.3598618
We will make sure to include the referenced articles in an updated version of our manuscript. These works show how mediators and contracts can help achieve cooperative equilibrium in an ML context. We currently only reference more theoretical papers on these topics (e.g., Monderer and Tennenholtz 2009). These papers will therefore be a valuable addition to our manuscript. Thanks for bringing these works to our attention!
---
Rebuttal Comment 1.1:
Title: Rebuttal Acknowledgement
Comment: My pleasure! | Summary: The paper presents 2-player “difference meta games,” which augment standard one-shot games with similarity information about the other player. The paper shows that such games have Pareto optimal cooperative equilibria that cause (seemingly naive) “ML algorithms” to cooperate rather than defect. Results are established primarily theoretically. Additionally, the CCDR algorithm is presented that cooperates in self play in a high-dimensional one-shot prisoner’s dilemma (formulated as a difference meta game).
Strengths: - The paper presents an interesting concept: difference meta games. These games are simpler than program meta games, and are in important ways more realistic.
- The paper has a high level of rigor and is, for the most part, easy to read and understand.
Weaknesses: - While difference meta games overcome the issue of full transparency, coming up with a reliable similarity metric (that can’t be duped) seems unrealistic. At the very least, there seem to be very limited situations in which a difference meta game could be used.
- The CCDR algorithm seems to rely on the idea of there being “cooperate” and “defect” actions. This seems problematic to me. First, for an arbitrary game, I didn’t see in the paper where the high-level actions of “cooperate” and “defect” were defined. Thus, it seems unclear what CCDR actually does. (Perhaps I missed something obvious, for which I apologize). Second, it seems that many games would not have such high-level actions (e.g., chicken or battle of the sexes). From my perspective, this further limitation seems rather confounding.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Practically speaking, how often in the world would an AI agent actually encounter a scenario in which a difference meta game could be used?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: The paper points out some limitations as “future work,” including that the best response can perform poorly against random agents and that the CCDR approach may have limited applicability in more complex games. However, it seems there are many more issues that could be made more explicit in the conclusions of the paper. First, the CCDR algorithm appears to rely on the fact that there are “cooperative” and “defect” higher-level actions. However, many games don’t have such actions. Second, the approach is also limited to two players.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank Reviewer xnVF for their thoughtful comments!
We address the question about realism and whether the similarity metric can be duped in the overall response, because Reviewer EVhh asks a similar question.
>the CCDR algorithm appears to rely on the fact that there are “cooperative” and “defect” higher-level actions. However, many games don’t have such actions.
We believe this is a misunderstanding. CCDR does not assume that there is an action labeled "Cooperate" or "Defect". As described in Sect. 6.1, the core idea of CCDR is, roughly, to minimize the following loss function in pretraining: the loss obtained against an exact copy plus the loss against a randomly generated opponent. To minimize the first summand one generally has to play the highest-payoff symmetric strategy profile in the base game when observing a diff value of 0. In the Prisoner’s Dilemma, the best symmetric strategy profile is (Cooperate, Cooperate). So, that’s where the CC (“Cooperate against Copies”) comes from. However, in other games this CC summand of the CCDR loss is still well defined and minimized by what we would consider to be cooperative strategy profiles. For example, in the Game of Chicken, one would learn to play Swerve against a copy (depending on the specifics of the payoffs) and in Stag Hunt one would learn to play Stag against a copy. To minimize the second summand of the CCDR loss, one generally has to learn something like the following: If I observe that the opponent is very different from me (as different as a randomly generated opponent), then play a strategy for the base game that is good against a randomly generated opponent strategy. In the Prisoner’s Dilemma, this specifically means learning to defect upon observing that the opponent is very different (as different as a randomly generated opponent). That’s where the DR (“Defect against Random”) comes from. DR is also well defined in other games. However, compared to CC, it’s less clear what DR implies in other games, because it depends on the distribution of opponents generated. That said, we might imagine that in Stag Hunt, minimizing DR loss would imply playing Hare at high diff values due to [risk dominance](https://en.wikipedia.org/wiki/Risk_dominance).
We will make sure to make it clear in the paper that CCDR doesn’t require that we have actions labeled “cooperate” or “defect”.
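To make the two summands concrete, here is a minimal numerical sketch of a CCDR-style pretraining loss. The payoff values, the smooth threshold policy, and the ranges used for random opponents are illustrative assumptions of this sketch, not the paper's actual setup:

```python
import numpy as np

rng = np.random.default_rng(0)

# Prisoner's Dilemma payoffs (illustrative values, not the paper's Table 1):
# row = own action, column = opponent's action; index 0 = Cooperate, 1 = Defect.
PAYOFF = np.array([[3.0, 0.0],
                   [4.0, 1.0]])

def coop_prob(params, diff):
    """Smooth threshold policy: cooperate when the observed diff is below
    params[0]; params[1] controls how sharp the threshold is."""
    threshold, sharpness = params
    z = np.clip(sharpness * (diff - threshold), -50.0, 50.0)
    return 1.0 / (1.0 + np.exp(z))

def expected_payoff(p_me, p_opp):
    """Expected payoff when I cooperate w.p. p_me and the opponent w.p. p_opp."""
    me = np.array([p_me, 1.0 - p_me])
    opp = np.array([p_opp, 1.0 - p_opp])
    return float(me @ PAYOFF @ opp)

def ccdr_loss(params, n_random=256):
    # CC term: an exact copy yields diff = 0 and plays the same policy,
    # so this summand rewards mutual cooperation.
    p_self = coop_prob(params, 0.0)
    cc = -expected_payoff(p_self, p_self)
    # DR term: randomly generated opponents look very different (large diff)
    # and cooperate with arbitrary probability, so this summand rewards defection.
    dr = 0.0
    for _ in range(n_random):
        dr -= expected_payoff(coop_prob(params, rng.uniform(0.5, 1.0)),
                              rng.uniform(0.0, 1.0))
    return cc + dr / n_random

# Conditioning on the diff beats both unconditional strategies:
conditional = (0.25, 20.0)   # cooperate against (near-)copies, defect otherwise
assert ccdr_loss(conditional) < ccdr_loss((10.0, 20.0))   # vs. always cooperate
assert ccdr_loss(conditional) < ccdr_loss((-10.0, 20.0))  # vs. always defect
```

Note that nothing in the loss names "cooperate" or "defect": the CC term simply favors whatever symmetric profile pays best, and the DR term favors whatever is robust against random opponents.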
---
Rebuttal Comment 1.1:
Comment: Thanks for the clarifications and arguments. I remain dubious about the generality and practicality of the approach. | Summary: Training AI agents that will reliably cooperate, both with humans and with other AI, is one of the principal goals of AI alignment research, and the prisoner's dilemma (PD) is simple game that has long been used to study cooperation. PD is interesting because it has only one Nash equilibrium: to defect (i.e. not cooperate), which is a somewhat paradoxical result because mutual cooperation would be better for all participants. The usual modification to PD is the iterated prisoner's dilemma, in which each agent can observe a history of past interactions. This history opens up new equilibria, and allows agents to learn to cooperate.
This paper proposes another potential modification to PD with cooperative equilibria, namely supplying each agent with a measure of the similarity between its policy, and the policy of other agents. Agents can learn to cooperate with similar agents, and defect against dissimilar ones. The paper proves some basic theorems about equilibria in such games.
The paper also proposes a variant -- high-dimensional PD (HDPD), which has a more complex notion of "cooperate" and "defect" than simple PD. Naive training of an agent in HDPD does not result in cooperation. However, the authors introduce a pre-training mechanism called Cooperate against Copies and Defect against Random (CCDR), in which agents alternate self-play (i.e. play against perfectly similar copies) and play against random opponents. When using this pre-training mechanism, agents learn to cooperate.
Strengths: Although it is not mentioned by the authors in the paper, I would like to note that "perceived similarity" does, in fact, appear frequently in the natural world. Kin selection (genetic similarity) is the basis for cooperation among social insects like ants and bees, and shared language, culture, and religion have historically been the basis for cooperation among humans. IMO, this is thus a potentially important avenue of research, which has been overlooked in the literature.
The paper is well written, and the results seem solid. The authors have an excellent discussion of the limitations of CCDR -- in particular, after learning to cooperate during pre-training, some agents partially unlearn it during the rest of training. Further exploration in this area could have potential applications on AI alignment research.
Weaknesses:
My main concern is this is mostly a paper on game theory, so I question whether NeurIPS is really the right venue for it. The actual experiment also seems somewhat simplistic to me. I believe that the primary application of this work relates to AI alignment, so I would have preferred to see more discussion about how these ideas could be applied to the real world.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors:
From an AI alignment perspective, can a meaningful measure of "similarity" be derived for more complex agents, operating in more complex environments? In more complicated cases, the only reliable information that an agent can observe about other agents is the history of past actions, in which case the notion of "similarity" becomes much the same as the iterated PD, or other reputation-based systems.
Is it possible for agents to game the system, and hack the similarity function? This is probably relevant only for more complex agents than those considered in this paper, but is extremely important wrt. AI alignment.
Is it possible to derive a notion of similarity that would enable otherwise dissimilar agents to cooperate? E.g. among humans, people with very different goals and backgrounds can cooperate if both parties agree to follow the same "moral code", or "rule of law".
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations:
Although this paper has potential applications wrt. AI alignment, the authors focus primarily on detailed mathematical analysis, and fail to discuss the larger social implications.
However, I applaud the authors for their frank and open discussion of the limitations and failures of CCDR. IMO, that discussion is one of the most valuable parts of the paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank EVhh for this insightful review! We address the point about realistic settings and gaming the similarity metric in the general response.
>My main concern is this is mostly a paper on game theory, so I question whether NeurIPS is really the right venue for it.
First, we believe our paper is a valuable and relevant contribution to the NeurIPS community since the diff game setup is specifically motivated by ML agents. We also introduce a new type of prisoner’s dilemma environment for experimental work, a simple pre-training method and experimental results, including a comparison to a canonical baseline from the multi-agent learning literature. Even if the method and experimental results are limited in scope, we believe this work will serve as a foundation for future work on the important topic of cooperation in multi-agent machine learning.
Second, it has long been the case that NeurIPS accepts a significant number of game theory papers, sometimes even without any strong learning component. [Some examples from previous years: https://proceedings.neurips.cc/paper_files/paper/2022/file/aa5f5e6eb6f613ec412f1d948dfa21a5-Paper-Conference.pdf ; https://proceedings.neurips.cc/paper_files/paper/2022/file/9d823334fdccb62a544fa7643cf0615d-Paper-Conference.pdf ; https://proceedings.neurips.cc/paper/2018/file/a9a1d5317a33ae8cef33961c34144f84-Paper.pdf ; https://proceedings.neurips.cc/paper_files/paper/2021/file/09a5e2a11bea20817477e0b1dfe2cc21-Paper.pdf ; https://proceedings.neurips.cc/paper_files/paper/2022/file/cfce833814505906445f8df2f65ab548-Paper-Conference.pdf ] This shows that contributions in this area are frequently evaluated as relevant to the NeurIPS community. In addition, the [NeurIPS 2023 call for papers](https://neurips.cc/Conferences/2023/CallForPapers) explicitly includes topics such as “algorithmic game theory” and “economic aspects of machine learning”. Of course, which topics/areas are relevant to NeurIPS is not our decision to make, but for the purposes of fairness and of accepting the best papers within each area, we believe this decision should be made consistently across the conference, not ad hoc on a paper by paper basis.
>Is it possible to derive a notion of similarity that would enable otherwise dissimilar agents to cooperate? E.g. among humans, people with very different goals and backgrounds can cooperate if both parties agree to follow the same "moral code", or "rule of law".
This is an important question. We believe the answer is yes, both in theory and practice! As the reviewer hints, the key is to use a notion of similarity that compares only cooperativeness or “moral code” and related concepts, and ignores aspects that are irrelevant to SBC. Theoretically it’s not too difficult to show the existence of such cooperative equilibria in some simple settings. Here is a simplistic example: Imagine that the two agents play a Prisoner’s Dilemma against each other, but also engage in some other activity that does not affect the other player’s payoff. Imagine further that the diff function only compares the Prisoner’s Dilemma parts of the agent’s policy, i.e., the functions from diff to probability of cooperation. Then all results from our paper can be directly applied to show that cooperative equilibria exist. In any case, our folk theorem implies the existence of equilibria between agents that behave very differently, though there might not be any equilibria with natural diff functions.
We believe that this SBC between otherwise dissimilar agents can also work in practice, because, again, for SBC only the similarity of a very specific aspect of the agents and their policies matters. That said, one should expect some difficulties in the real world, because it is often not clear whether a particular signal of being different matters or not. Further experimental work is needed to evaluate how much is lost to these difficulties.
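As a toy illustration of such a restricted similarity notion (the dictionary-of-components policy representation below is a hypothetical of ours, not the paper's formalism), a diff function might probe only the Prisoner's-Dilemma part of each policy and ignore everything else:

```python
def partial_diff(policy_a, policy_b, probes=None):
    """Largest disagreement between the PD components only (each a function
    from an observed diff value to a cooperation probability); any other
    component of the policy is ignored entirely."""
    probes = probes or [i / 100 for i in range(101)]
    return max(abs(policy_a["pd"](d) - policy_b["pd"](d)) for d in probes)

# Two agents that differ wildly outside the PD still count as exact copies:
tit  = {"pd": lambda d: 1.0 if d < 0.1 else 0.0, "other": "paints landscapes"}
tat  = {"pd": lambda d: 1.0 if d < 0.1 else 0.0, "other": "writes sonnets"}
liar = {"pd": lambda d: 0.0,                     "other": "paints landscapes"}
assert partial_diff(tit, tat) == 0.0   # identical PD behavior -> diff 0
assert partial_diff(tit, liar) == 1.0  # different PD behavior -> maximal diff
```

Under such a diff function, the results of the paper apply directly to the PD component, so otherwise dissimilar agents can sustain cooperative equilibria.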
We will update our manuscript to make it clearer that only a specific kind of similarity is important. We will also add a note on this in the “Conclusion and future work” section.
With regard to AI alignment, we will elaborate on the relevance of this work in the camera ready copy. We think our work is important for two reasons. First, we want future models acting as representatives of different human actors to be able to reach and enforce mutually beneficial, cooperative outcomes and avoid conflict (see the recent agenda papers by Dafoe et al. (https://arxiv.org/abs/2012.08630) and Conitzer et al. (https://doi.org/10.1609/aaai.v37i13.26791)). SBC is a particular mechanism by which models could cooperate in equilibrium. Another aspect not mentioned in our original submission (but mentioned in footnote 1 of the Conitzer paper) is that cooperation between AIs can also be undesirable in some circumstances. For example, some AI alignment approaches, such as debate or automated interpretability, depend on checks and balances between different AI systems. Similarly, many real-world economic mechanisms such as auctions break if participants are able to collude. While most of our work is motivated by enabling SBC in contexts where cooperation is desirable, better understanding SBC and the conditions under which it can arise will also help us guard against such unintended collusion between models.
---
Rebuttal Comment 1.1:
Title: The devil is in the details...
Comment:
Thank you for the response. When defining a diff function, I tend to think that "the devil is in the details." Certainly in human interactions, agents who profess to follow a moral code but then violate that code when they think they can get away with it are a very common occurrence. :-)
However, basic theoretical work which shows that equilibria exist in simple settings is a first step towards solving that problem.
Future work might also focus on how robust those equilibria are. For example, if the diff function is not entirely reliable, i.e. reflecting a judgment that "the other agent looks honest, but I can't be sure", do the equilibria still exist? How much noise or uncertainty can they tolerate before things fall apart?
---
Reply to Comment 1.1.1:
Title: (Optional) further thoughts on more realistic theoretical models and experiments
Comment: Thanks for this response! We agree with the reviewer’s comments. In particular, we agree that this is an important question and that future work should tackle it. We give some further thoughts on settings in which to investigate these things below. The reviewer and AC shouldn’t feel obligated to read these thoughts.
It seems that an important question is whether _realistic_ ways of perceiving similarity are sufficient for establishing cooperative equilibrium. Both our theoretical results and our empirical results are about artificial ways of perceiving similarity. That being said, the reviewer's comment inspired us to consider the following slight extension of our model:
Consider the Prisoner’s Dilemma, parameterized by G, as per Table 1 of our paper. Imagine that each player is only allowed to submit a threshold agent $(C,\theta_i,D)$. Now imagine that each player $i$’s diff function works as follows:
With probability $p_i$: $\mathrm{diff}_i(\theta_1,\theta_2)=|\theta_1-\theta_2|+N$ with $N\sim \mathrm{Unif}([0,\epsilon])$ for some $\epsilon>0$.
But with probability $1-p_i$: $\mathrm{diff}_i(\theta_1,\theta_2)=0$.
(For $p_1,p_2=1$ this is exactly the setting in Example 1 and Proposition 1 of the paper.)
Intuitively, this is supposed to model the case where each player can try to “manipulate” the diff value to be 0 and the manipulation succeeds with probability $1-p_i$. It is furthermore assumed (somewhat unrealistically) that if manipulation fails, the other player never learns of the attempt to manipulate. Instead, the diff value is observed normally if manipulation fails. That way we can assume that each player always attempts to manipulate.
Then the following generalization of Proposition 1 holds:
In the aforedescribed game, $((C,\theta_1,D),(C,\theta_2,D))$ is an equilibrium if and only if
* $\theta_1,\theta_2 \leq 0$, or
* $0\leq \theta_1=\theta_2 \leq \epsilon$ and $Gp_i\geq 2$ for $i=1,2$.
The proof works just the same as the proof of the original Proposition 1. Intuitively, the idea (for the second part) is that if Player $i$ decreases $\theta_i$ by some small $\delta$, their own probability of cooperation decreases by $2\delta/\epsilon$. Meanwhile, the opponent’s probability of cooperation decreases by $p_{-i}\delta/\epsilon$. Thus, the overall change in Player $i$’s utility is $2\delta/\epsilon - G p_{-i} \delta/\epsilon$. For this deviation to be unprofitable, the change must be nonpositive, which requires $Gp_{-i}\geq 2$; applying this to both players gives $Gp_i\geq 2$ for $i=1,2$.
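The deviation calculus can be checked numerically with a small helper (a direct restatement of the marginal utility change above; the sample values of $G$, $\delta$, and $\epsilon$ are our own):

```python
def deviation_gain(G, p_other, delta=0.01, eps=0.1):
    """Change in Player i's utility from decreasing theta_i by delta,
    per the calculus above: 2*delta/eps - G * p_other * delta/eps."""
    return 2 * delta / eps - G * p_other * delta / eps

# With G = 4, the cooperative equilibrium requires manipulation to fail
# at least half the time (p_other >= 2/G = 1/2):
assert deviation_gain(G=4, p_other=0.6) < 0  # deviation unprofitable
assert deviation_gain(G=4, p_other=0.4) > 0  # deviation profitable
```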
In terms of more realism, here is an ambitious setting that it would be exciting to study in future work. Say we have two complex base models. For concreteness, let’s say these are two potentially very different LLMs. Now, we get to train the LLMs to do SBC with each other. Call the resulting models the SBC base models. Now two players (principals) Alice and Bob play a game in which each chooses a fine-tuning scheme for one of the SBC base models. The LLMs then play a game against each other on the principals’ behalf. Let’s imagine that this game is a social dilemma. But let’s also imagine that this game has some player-specific aspects. In particular, imagine that Alice and Bob need to do some fine-tuning to achieve high utility and that this necessary fine-tuning is different between the two players. Finally, as our diff, we imagine that the two models make some observation about how they were fine-tuned. For example, we might imagine that they directly observe each other’s fine-tuning data sets. (At least for practical purposes this is still very far from full transparency.) But we might also imagine that they observe some abstract description of these data sets, such as a description of what kind of data they contain. For example, we might imagine descriptions such as, “The opponent model was trained on analyses of 18th century landscape paintings; it wasn’t trained on any strategic data” or “Alice’s and Bob’s fine-tuning data seems to have a very similar effect on strategic behavior”. We might use another language model to write a description of how the fine-tuning data sets relate.
Now the challenge would be: Can we train the SBC base models in such a way that the game between Alice and Bob has cooperative (approximate) equilibria? We conjecture that the answer is yes. However, depending on various details of the setting, this is not at all obvious. For example, it may be possible for Alice or Bob to deploy “deceptive training data”, i.e., training data that looks innocuous but somehow still affects the strategic behavior.
While this setting is still artificial (like most work on safety in social dilemmas between autonomous agents), we think it could elucidate the kind of worries shared by EVhh and the authors in a way that our numeric diff model cannot. Unfortunately, it is also vastly more complicated than training models with ~10k parameters and a handful of inputs and outputs. Nonetheless, we prospectively would like to investigate this kind of setting. | Summary: This paper considers the problem of two agents trained by machine learning who interact in a social dilemma. The agents are able to observe a numeric measure of the similarity of their learned policies, and condition on this similarity when choosing their actions (in contrast to the "full transparency" case, in which the agents can each observe the other's full source code). When policies can condition on similarity, cooperation can be supported in equilibrium (similarly to various full transparency results in the literature).
When partially-transparent policies are naively trained, they tend not to learn to play toward cooperative equilibria; however, the paper demonstrates that pre-training by averaging between playing against the same policy and a randomly-sampled policy can enable machine learned models to find partially cooperative equilibria.
Strengths: This paper correctly observes that partial transparency supports the same equilibrium outcomes as full transparency, while being dramatically more plausible. The results are carefully derived and clearly presented. The paper studies a possible approach to solving an important problem; social dilemmas succinctly describe a fundamental problem in many situation where agents would all gain by cooperating.
This paper takes a step beyond proving the existence of equilibria, and demonstrates that plausible training procedures can actually find them.
The notion of similarity is based on the similarity of the policies considered as functions, rather than being computed on weights or some other parametric representation. This is very natural, and I think it's an improvement over the "source code" based approach that earlier full-transparency literature tends to appeal to.
Weaknesses: My main concern is that the results really only apply to a very specific class of games (player-symmetric additively decomposable games). It makes sense that the equilibrium uniqueness results of theorem 4 should be narrowly targeted (it's hard to prove things about large classes of games), but I would have liked to see some experimental examination of whether the proposed approach works well on games that don't satisfy these restrictions.
The procedure described still tends not to find fully cooperative equilibria, which seems like a shortcoming.
#### Minor issues (did not affect rating)
- p.1: "While partial transparency is the norm": This is a pretty strong claim, and one which is important for the main claims of the paper. Could you back it up with a citation or example or something else beyond a bare assertion?
- p.2: "In particular, full transparency can make the problem of equilibrium selection harder": Isn't this also true of partial transparency?
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: Have you evaluated your approach on non-additively-decomposable games?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 2 fair
Limitations: n/a
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank Reviewer 2ynN for their detailed review!
>My main concern is that the results really only apply to a very specific class of games (player-symmetric additively decomposable games). It makes sense that the equilibrium uniqueness results of theorem 4 should be narrowly targeted (it's hard to prove things about large classes of games), but I would have liked to see some experimental examination of whether the proposed approach works well on games that don't satisfy these restrictions.
>
>Have you evaluated your approach on non-additively-decomposable games?
Regarding further experiments on games that break our restrictions: We agree that it is important to study more types of games. We have run some informal experiments on games that are only approximately additively decomposable and only approximately symmetric (while using diff functions that ignore these slight asymmetries). Generally the results in these settings are – unsurprisingly – similar to the current setting. We are also currently working on a second project about similarity-based cooperation that studies more radically different games (such as Chicken, repeated games) and explores more different methods (mostly other methods from the opponent shaping family). We have decided to leave further experiments on other types of games to a different paper for the following reasons.
Most importantly and mundanely, any new experiments would require extra space to describe, especially if new conceptual issues need to be addressed. For example, for an experiment with non-additively decomposable games, we would have to first define such a game a la HDPD (assuming we want the experiments to be comparable), which is non-trivial. We would also need to somehow address the difficulties of using CCDR in non-additively-decomposable games. (It may work very well without any specific modifications, e.g., in Chicken, but we would have to explain why it works.) Using the Prisoner’s Dilemma as a running example throughout the paper provides intuition for the more complicated HDPD. If we use a substantially new game – say, Chicken – we would have to first build up intuition for it. (What do we even expect the SBC equilibria to look like? Etc.) We don’t think we have the necessary space to spare to do all this – one could argue that we already rely too much on discussing details and results in the appendix that should be discussed in the main text (e.g., the LOLA results).
We also think that many such further experiments are a more natural fit for future projects than they are for the present one. Generally, the main research question behind the experiments in the present paper is whether SBC equilibria in complex settings (as opposed to the very simple settings studied by theory) can be found with machine learning. It’s not clear a second example like a high-dimensional version of Chicken would add much to answering this question (at least if the complexities involved are very similar to those introduced by the HDPD). Secondarily, because the HDPD is closely related to our theoretical settings, our experimental results test whether our theoretical results apply in a more complex setting. It seems more natural to study other game-theoretic dynamics such as Chicken in contexts where one can ask more concrete questions. For example, if one could get opponent shaping to work in diff games, one could ask whether opponent shaping methods learn (non-diff-based) asymmetric Dare–Swerve equilibria or diff-based Swerve–Swerve equilibria.
>p.1: "While partial transparency is the norm": This is a pretty strong claim, and one which is important for the main claims of the paper. Could you back it up with a citation or example or something else beyond a bare assertion?
We will clarify the claim mentioned on p.1 – thanks for pointing it out! We here merely mean that it is common to have some non-trivial information about the other player other than that they are a rational agent with a particular utility function and particular beliefs. We do not yet want to make a claim at this point about how commonly this information induces new, better equilibria.
>p.2: "In particular, full transparency can make the problem of equilibrium selection harder": Isn't this also true of partial transparency?
Regarding the claim on p. 2: Yes, that’s correct and we’ll need to clarify this as well in the paper. What we meant to refer to here is that full transparency makes the equilibrium selection problem very difficult; diff games seem to avoid this under some assumptions. Our paper does not say anything about other kinds of partial transparency and what they do to the equilibrium selection problem. All of this is unclear at this point in the paper.
---
Rebuttal Comment 1.1:
Comment: Thanks for the clarifications! | Rebuttal 1:
Rebuttal: We thank the reviewers for their efforts in evaluating and helping us improve our paper! We are pleased that they found our paper “well written” (EVhh, ty7R), “easy to read and understand” (xnVF) and the results “clearly presented” (2ynN). It’s encouraging that the reviewers find our approach “interesting” (xnVF), “natural” (2ynN), “logical” (ty7R) as well as “more practical and relevant for AI” (ty7R) and “dramatically more plausible” (2ynN) than prior work on program equilibrium. We are also glad that the reviewers find our results “carefully derived” (2ynN), “solid” (EVhh) and “rigor[ous]” (xnVF), and that they appreciate our “excellent discussion of the limitations of CCDR” (EVhh) and the HDPD as “a cool independent contribution” (ty7R).
We will post replies to the individual reviews. We here address a question brought up by both xnVF and EVhh. Roughly, both xnVF and EVhh would have liked more discussion of realistic settings for SBC. They are also both concerned that the similarity metric could be gamed.
We agree that these questions are important. We believe that SBC has future applications to high-stakes interactions between AI systems, but recognize that there are many open questions about SBC. First, depending on how hard it is to find SBC equilibria, SBC may or may not play an important role. Our paper makes progress on this question and we hope future work will further resolve uncertainty. Second, we don’t know how future deployments of AI will go. For example, SBC is in part motivated by the belief that in future deployment scenarios, AI models will often strategically interact with near copies of each other (see more below). Perhaps future deployment will work differently. Due to limited space we have discussed these issues only briefly in our submission (e.g., lines 53–56), but we will elaborate here and in the camera ready copy as space permits.
First, we agree that gaming of the diff function is a concern. In fact, we think there are two somewhat different concerns.
1. As in program equilibrium, the policies in our setup are essentially commitment mechanisms that are made credible by the diff function. For example, let’s say that Alice submits a “Cooperate against copies; defect against non-copies” policy and diff reveals whether Alice’s and Bob’s policies are copies of each other. Then if Bob submits the same policy, he effectively credibly commits to that policy. Because we can view the policies and the diff observation as credible commitments, many of the usual points about commitment apply: Players would want to pretend to make credible commitments and try to renege on commitments if possible. For instance, in the above example, once the signal “Alice’s and Bob’s policies are copies” has been sent to Alice’s policy, Bob will want to intervene to defect, rather than follow his submitted policy. If this is possible, the cooperative equilibrium disappears.
2. Under a given diff function, it may be unclear whether a particular policy profile $(\pi_1,\pi_2)$ is a cooperative equilibrium or not, even if the diff function is faithfully observed. For example, Player 1 might not know whether $\pi_1$ is exploitable.
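As a toy sketch of the unraveling described in the first concern (payoff numbers and function names here are illustrative, not from the paper), consider a prisoner's dilemma with the standard ordering T > R > P > S:

```python
# Toy prisoner's dilemma payoffs for the row player, with T > R > P > S
PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

def copy_policy(diff_signal):
    # "Cooperate against copies; defect against non-copies"
    return "C" if diff_signal == "copies" else "D"

# Both players submit the same policy and diff reports "copies":
# the policies credibly commit both sides to mutual cooperation.
a, b = copy_policy("copies"), copy_policy("copies")
assert (a, b) == ("C", "C")

# But if Bob can intervene after the "copies" signal has been sent to
# Alice's policy and defect instead, he strictly gains, so the
# cooperative equilibrium disappears.
assert PAYOFF[("D", "C")] > PAYOFF[("C", "C")]
```

The second assertion is exactly the temptation payoff exceeding the reward payoff, which is what makes reneging attractive once commitment can be broken.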
We now argue that nonetheless SBC is applicable in the real world. (Some of the ideas below resemble ideas from discussions of other forms of credible commitment.) Imagine that A and B each delegate their choice in a complicated strategic scenario to an AI system. To do so, they both provide some information on the scenario they face and on their goals and the system implements or recommends some strategy to achieve these goals. Now imagine specifically that A and B both licensed their system from the same company and then fine-tuned it for their specific application area. Finally, imagine that the chatbots obtain this information (not necessarily from A and B). We think that SBC could apply in this type of scenario. That is, A’s and B’s systems can partially cooperate in this setting depending on how much domain-specific fine tuning occurs. For this we need some important but, we believe, often realistic assumptions:
- We suppose that A and B are unable to make a competent choice themselves (e.g., because the scenario is too complex, or because the AI system controls actuators that Alice and Bob cannot control themselves). So they cannot “intervene” to break SBC.
- We must also imagine that Alice and Bob cannot delegate to another AI system, at least without risking that the other learns of this switch. For example, we might imagine that they cannot afford to license multiple systems.
- We imagine that Alice and Bob cannot secretly modify their system to defect. For example, we may imagine that the AI company runs the system on their own servers and restricts the ways in which licensees can use or modify the system. Specifically, it may restrict deceptive applications including the ones needed to break SBC. Or we could again imagine that modifying the system’s strategic behavior would require a large effort that is hard to reliably keep secret.
One could also consider settings in which diff observation and other forms of partial transparency are set up deliberately, including to allow for cooperative equilibria. Cryptographic tools (computation on the blockchain, zero-knowledge proofs, etc.) provide interesting means to achieve this. More straightforwardly, a company or government entity could provide certifications of AI systems. For example, credibility in the above scenario could be achieved if the certification company ensures that A and B have not modified the strategic behavior of their AI systems w.r.t. SBC.
The above considerations primarily address the first worry. They address the second worry only insofar as it is intuitive that the example diff functions allow for equilibria. That said, for a given non-trivial, imprecise description of how similarity is observed, it seems difficult to say whether the second type of gaming is possible or not. | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
PCF-GAN: generating sequential data via the characteristic function of measures on the path space | Accept (poster) | Summary: This paper proposes PCF-GAN, which aims to improve the effectiveness of the discriminator in differentiating the time series distributions by utilizing the path characteristic function (PCF) as a principled representation of the time series distribution within the discriminator. The authors also give the theoretical foundation of the PCF distance, proving its properties that ensure stable and feasible training of PCF-GAN. Extensive experiments show the effectiveness of the proposed method.
Strengths: The newly proposed metric in the discriminator for handling sequential data is interesting. Also, the author provides theoretical evidence for the analytical properties of the proposed loss metric.
Weaknesses: - The presentation of this work is kind of confusing, making it difficult to fully comprehend the content. Specifically, in Figure 3, it is unclear what the generator loss represents if it is computed on the embedding vector. Additionally, it is unclear why the generator and regularization losses can enhance the discriminative power in Equation 10. Clarifying these would help the audience better understand.
- The related work section consists of only one paragraph, which may overlook some relevant research in the field.
- The ablation study of every component is required to demonstrate the effectiveness individually.
- Regarding sequential data, I am wondering how this method would work on more challenging domains like video data, since videos are also sequential in nature.
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: Please see Weaknesses
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 2 fair
Limitations: Please see Weaknesses
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We address all the questions in detail as follows.
>The presentation of this work is kind of confusing, making it difficult to fully comprehend the content. Specifically, in Figure 3, it is unclear what the generator loss represents if it is computed on the embedding vector. Additionally, it is unclear why the generator and regularization losses can enhance the discriminative power in Equation 10. Clarifying these would help the audience better understand.
We extend our gratitude to the reviewer for constructive suggestions on the presentation of our work, based on which careful clarifications will be made in the revised manuscript. We summarise below the key points.
(1) When the embedding map $F$ is injective, it is proved that $F(X) \neq F(Y)$ if and only if $X \neq Y$ (both $\neq$ understood in terms of distributions). Thus the generator loss, defined as the EPCFD between the embedding vectors, can be regarded as a distance function on the original time series distributions via the injective embedding map, hence preserving the discriminative power.
(2) The regularization loss, defined as the EPCFD between the noise and the embedding of true time series, is proposed to enforce the injectivity of the embedding map $F_{\theta_f}$. When the embedding map and the generator $G_{\theta_g}$ are pseudo-inverses of each other and true & fake distributions coincide, the regularization loss attains zero. This justifies the use of regularization loss in Equation (8).
(3) The improved discriminative power is manifested via the increased distance score between any pair of given distributions. This is ensured by maximizing the EPCFD used in the generator loss and the regularization loss through optimizing the model parameters $\theta_M$ and $\theta_{M}'$, respectively (see Equation (10)). A close analogy in the literature is the optimization of the model parameters of the test function (critic) by maximizing the Wasserstein distance in WGANs.
(4) In addition, numerical evidence has been provided in Section B.4.2. Figure 6 demonstrates that the optimized $\theta_M$ increases the discriminative power of EPCFD on the stochastic process. We will include further modification in Section 3.3 in the revised manuscript.
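For concreteness, the distance computation discussed in the points above can be sketched in a few lines of NumPy. This is only an illustrative sketch of an empirical PCF-style distance as we have described it (unitary developments of path increments driven by random anti-Hermitian generators, compared in the Hilbert-Schmidt norm); the function names and the discretization are assumptions for illustration, not the paper's actual implementation:

```python
import numpy as np

def expm_antihermitian(A):
    """exp(A) for anti-Hermitian A (so exp(A) is unitary), via eigh on H = -iA."""
    w, V = np.linalg.eigh(-1j * A)
    return (V * np.exp(1j * w)) @ V.conj().T

def unitary_development(path, M):
    """Development of a path (T, d) under anti-Hermitian generators M (d, m, m)."""
    m = M.shape[1]
    U = np.eye(m, dtype=complex)
    for dx in np.diff(path, axis=0):  # increments of the path
        U = U @ expm_antihermitian(np.tensordot(dx, M, axes=(0, 0)))
    return U

def random_generators(d, m, k, rng):
    """k random anti-Hermitian generator tensors of shape (d, m, m)."""
    B = rng.normal(size=(k, d, m, m)) + 1j * rng.normal(size=(k, d, m, m))
    return B - np.conj(np.swapaxes(B, -1, -2))  # slice-wise A with A^H = -A

def epcfd2(X, Y, Ms):
    """Squared empirical PCF distance between path samples X, Y of shape (n, T, d)."""
    total = 0.0
    for M in Ms:
        phi_X = np.mean([unitary_development(x, M) for x in X], axis=0)
        phi_Y = np.mean([unitary_development(y, M) for y in Y], axis=0)
        total += np.linalg.norm(phi_X - phi_Y) ** 2  # Hilbert-Schmidt norm
    return total / len(Ms)
```

Because each development is unitary, every term of the distance is uniformly bounded, which mirrors the boundedness property discussed above.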
>The related work section consists of only one paragraph, which may overlook some relevant research in the field.
Please refer to the *Related work* section in *Reply to all reviewers*.
>The ablation study of every component is required to demonstrate the effectiveness individually.
We provided the ablation results in Section D.1, supplementary materials in the original submission. Table 3 shows that by incorporating the regularisation loss $L_{\text{regularisation}}$ and the reconstruction loss $L_{\text{recovery}}$, the model performance has been improved --- in terms of all three different test metrics --- consistently across empirical data. Moreover, the additional losses are proven to be more effective when applied to high-dimensional time series.
>Regarding sequential data, I am wondering how this method would work on more challenging domains like video data, since videos are also sequential in nature.
See the *Choice of dataset* section in *Reply to all reviewers*. | Summary: The paper looks at generative models for time series data. It proposes a new GAN method, based on a novel discriminator. The path characteristic function (PCF) is used as a representation of the time series distribution. Using this, a distance between two distributions is defined (PCFD) as well as a way to approximate it (EPCFD). Afterwards, it is shown how EPCFD can be used as a discriminator in a GAN scenario (resulting in the PCF-GAN). To scale it to larger input dimensions, the authors follow previous literature and introduce a parameterised dimensionality reduction of the inputs and introduce different losses in an effort to ensure that it behaves as expected. Finally, they conduct a number of experiments, showing that PCF-GAN compares favourably to competitive time-series GANs.
Strengths: The method and its different components was well motivated and presented.
The paper presents a compelling theoretical motivation for the method, giving evidence on why it might be preferable to alternatives such as WGAN.
The experimental results are well analysed and presented.
Weaknesses: The reconstruction functionality is not motivated, nor explained. It can be unclear to the unfamiliar reader why one would want it, given that we can just generate images.
Related work - similar work is only lightly touched upon and the advantages/disadvantages are not outlined, making it harder to compare.
The loss function in Eq. 9 is not arrived at in a principled way and has two hyper-parameters which might be difficult to tune?
In the experiments, the dimensionality of the inputs, as well as the length of the sequences appears rather low. This suggests potential scalability issues?
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: In Eq. 9, do you expect training only on “ -L_recovery - L_regularisation” to perform much worse?
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The method has computational limitations that limit the number of dimensions to which it can be applied. As mentioned before, an embedding function is trained to reduce the dimensionality of the inputs; however, if the effective dimensionality is high, the method would fail.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We address all the questions in detail as follows.
>The reconstruction functionality is not motivated, nor explained. It can be unclear to the unfamiliar reader why one would want it, given that we can just generate images.
Time series reconstruction potentially has broad applications in privacy preservation [1] and in extracting semantic representations for downstream tasks via latent embedding [2]. The reconstruction functionality allows us to share the trained generative model and the embedding of real data as a surrogate of the actual data. The separation of model and embedding reduces the risk associated with data sharing while still permitting high-quality reconstruction. We will add this to the introduction in the revised version.
>Related work - similar work is only lightly touched upon and the advantages/disadvantages are not outlined, making it harder to compare.
Refer to the *Reply to all reviewers*.
>The loss function in Eq. 9 is not arrived at in a principled way and has two hyper-parameters which might be difficult to tune?
The loss function in Eq. (9) incorporates two additional terms $L_{{\rm recovery}}$ and $L_{{\rm regularisation}}$ as penalty components. They are used to encourage the constraints $L_{{\rm recovery}}=0$ and $L_{{\rm regularisation}}=0$, which preserve the injectivity of the embedding $F_{\theta_f}$. The inclusion of penalty terms to transform constrained optimisations into unconstrained ones is a standard technique, exemplified by Lasso regression and Ridge regression. Empirically, the values of the two hyperparameters $\lambda_1$ and $\lambda_2$ can affect the training process and should be carefully chosen via grid search. Heuristically, the optimal $\lambda_1$ and $\lambda_2$ are those for which the magnitudes of the three losses are comparable.
>In the experiments, the dimensionality of the inputs, as well as the length of the sequences appears rather low. This suggests potential scalability issues?
In our experiments, the time dimension of the RV dataset can be as high as 200, whereas the path dimension of EEG is 14 (at least moderate to high). More importantly, the computational complexity of the PCFD used in our PCF-GAN is linear in both the time and path dimensions, so scalability issues are not expected for high-dimensional time series. Specifically, let $T$, $d$ and $m$ be the time dimension, the path dimension, and the matrix order of the EPCFD, respectively. Computing the PCFD loss costs $\mathcal{O}(Tdm^2)$, which is linear in both computation and storage, and hence scalable from the computational perspective. The choice of the hyperparameter $m$ may not be directly related to $T$ and $d$. With the embedding layer incorporated, the PCFD loss computation is linear in the output dimension of the embedding layer. Please see our reply to the **Limitations** section for additional comments.
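As a toy illustration of the claimed $\mathcal{O}(Tdm^2)$ scaling (a simple operation count only, not a benchmark of any actual implementation; the function name is made up):

```python
def epcfd_flops(T, d, m):
    # Forming each Lie-algebra element A_t = sum_i dx_i * M_i costs d * m^2
    # multiply-adds, repeated over T increments: O(T * d * m^2) overall.
    return T * d * m * m

# Doubling the sequence length or the path dimension doubles the count:
assert epcfd_flops(400, 14, 8) == 2 * epcfd_flops(200, 14, 8)  # linear in T
assert epcfd_flops(200, 28, 8) == 2 * epcfd_flops(200, 14, 8)  # linear in d
```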
Questions:
>In Eq. 9, do you expect training only on “ -L\_recovery - L\_regularisation” to perform much worse?
Yes, training solely on $-L_{\rm recovery} - L_{\rm regularisation}$ leads to significantly worse performance. These two components only ensure the injectivity of the embedding $F$, but are unable to match $F(G(Z))$ and $F(X)$ without including $L_{\rm generator}$. We ran a simple experiment with only $-L_{\rm recovery} - L_{\rm regularisation}$; the training completely failed from the beginning.
Limitations:
>The method has computational limitations that limit the number of dimensions to which it can be applied. As mentioned before, an embedding function is trained to reduce the dimensionality of the inputs; however, if the effective dimensionality is high, the method would fail.
Our model is scalable with respect to the dimension of embedding output. However, in the case that a large matrix size $m$ is essential for the training of some high-fidelity generative model, the current PCF-GAN might experience computational bottlenecks. In this regard, incorporating the embedding layer effectively reduces the need for large $m$ (see numerical examples in our paper). It is also worthwhile to investigate low-rank approximation of relevant matrices or combine other advanced GAN model architectures to enhance the scalability of our PCF-GAN. We will add a brief discussion on it to "Limitation and Future work" section.
[1] Rastogi, Vibhor, and Suman Nath. "Differentially private aggregation of distributed time-series with transformation and encryption." In Proceedings of the 2010 ACM SIGMOD International Conference on Management of data, pp. 735-746. 2010.
[2] Cho, Kyunghyun, Bart Van Merriënboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. "Learning phrase representations using RNN encoder-decoder for statistical machine translation." arXiv preprint arXiv:1406.1078 (2014). | Summary: The paper proposes an approach to improve time-series modeling in the Generative Adversarial Network framework by using the path characteristic function as the embedding of the time series sample. It explains the feasibility of the PCF distance and how to integrate it as a distance measure between two sets of time series data in order to help the discriminator learn better generative features. The goal and benefit of the proposed method is to empirically improve the generation quality of time series data.
Strengths: The paper explores using a novel loss function in the Generative Adversarial Network framework. It computes the proposed empirical path characteristic function distance between generated and given samples to learn better time series generative features. The proposed method outperforms other generative baselines on the given datasets. The paper is well written (and includes the codebase in the appendix for reproducibility).
Weaknesses: It would be great to have to following experiments in order to compare fairly with the existing baselines:
1. Comparison with existing baselines like COT-GAN on metrics like
(a) the sum of the absolute difference of the correlation coefficients between channels avg over time
(b) absolute difference between the correlation coefficients of real and generated samples.
2. Comparison on high dimensional dataset (like Sprites and human action sequences with FID, KID scores )
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: N/A
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: As mentioned in the weakness section, having more ablations and experiments for comparison would help the reader better understand the efficacy of the proposed approach.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We address all the questions in detail as follows.
>Comparison with existing baselines like COT-GAN on metrics like (a) the sum of the absolute difference of the correlation coefficients between channels avg over time (b) absolute difference between the correlation coefficients of real and generated samples.
We have included the two metrics on (a) auto-correlation and (b) cross-correlation with different lags as additional test metrics to evaluate the quality of the generative models. Please see the *test metrics* section in 'Reply to all reviewers'.
>Comparison on high dimensional dataset (like Sprites and human action sequences with FID, KID scores)
We refer to the discussion on the *choice of datasets* section in *Reply to all reviewers*.
*Ablation study*
Regarding the comments in the Limitations section, the ablation results have been provided in Section D.1 of supplementary materials in the original submission. Table 3 shows that by incorporating the regularisation loss $L_{{\rm regularisation}}$ and the reconstruction loss $L_{{\rm recovery}}$, the model performance has been improved --- in terms of all the three different test metrics --- consistently across empirical data. Moreover, the additional losses are proven to be more effective when applied to high-dimensional time series.
---
Rebuttal Comment 1.1:
Comment: Dear Authors,
Thanks a lot for addressing my concerns and providing the quantitative comparison with other baselines.
---
Reply to Comment 1.1.1:
Comment: Thank you very much for your confirmation! | Summary: This paper proposes a path characteristic function GAN, named PCF-GAN, for learning to generate time-series data. More specifically, the authors mainly employ the rough path theory to build the PCF distance, such that the temporal cues can be encoded by unitary features, enabling the PCF to learn sequential data. The PCF distance is then proved to be complete and favorable for minimizing the difference between two stochastic processes. Even though, an encoder-decoder-based embedding is proposed to compare two random processes in the latent space of reduced dimensions, catering for the ease of training in practice. Experimental results have also verified the effectiveness of the proposed PCF-GAN.
Strengths: The established theory on PCF is novel and complete, as far as I understand. The authors also propose a practical way of minimizing the PCF distance, by using an encoder-like embedding that is able to reconstruct signals.
Weaknesses: 1. I appreciate the theory behind the PCF. However, the usage of path theory needs to be clarified when the aim is to improve characteristic functions for representing stochastic processes. The authors claim that the CF-metric fails to capture the temporal dependency of sequential data, which is true. However, why does using path theory on characteristic functions address the temporal dependency well, given that there are other representations of the characteristic function of a stochastic integral?
2. The authors claim that the embedding operation does not have any gradient constraints. However, when calculating the EPCFD after the embedding layers, the continuity and differentiability properties may not hold without restrictions on the embedding layers. I am not sure whether the reconstruction regularization can strictly compensate for this. The authors are encouraged to elaborate more on this.
3. Another weakness concerns the experimental validation, which was performed on low-dimensional time-series datasets. The PCF-GAN is shown to beat COT-GAN on these datasets. However, COT-GAN is also able to generate short video clips, in which the scenarios are much more complicated. Would the PCF-GAN scale well to higher-dimensional generation tasks, such as short video generation? Also, the compared baselines are not state-of-the-art: many recent methods have been reported to surpass COT-GAN and TimeGAN, for example, [1] and [2].
[1] Seyfi, Ali, Jean-Francois Rajotte, and Raymond Ng. "Generating multivariate time series with COmmon Source CoordInated GAN (COSCI-GAN)." Advances in Neural Information Processing Systems 35 (2022): 32777-32788.
[2] Jarrett, Daniel, Ioana Bica, and Mihaela van der Schaar. "Time-series generation by contrastive imitation." Advances in Neural Information Processing Systems 34 (2021): 28968-28982.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: 1. In Equation (4), why x_0=I_m. Wouldn't it be y_0=I_m for the differential equation?
2. In Line 254, what does it mean by IGM? Should it be IPM?
3. Why for the regularization loss, this paper employs a different \theta'_M? How \theta_M and \theta'_M were implemented in this paper?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: Please see my weakness.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We address all the questions in detail as follows.
**1. Temporal dependency**
The path characteristic function (PCF) is proven to characterize the law of the stochastic processes, or time series. Hence, PCF completely determines the temporal dependency, *e.g.*, statistics like auto-correlation. The fundamental mechanism lies in the non-commutativity of matrix multiplication --- the multiplication in the Lie groups used to define PCF --- which echoes the non-interchangeability of the temporal order of events in time series. Note that the PCF deals with the measures on the *$\infty$-dimensional* space of paths, while the usual characteristic function (CF) is defined for $\mathbb{R}^d$-valued random variables. The latter cannot be directly extended to the $\infty$-dimensional setting mainly because the range of the exponential map used in CF is $i \mathbb{R}$, which is commutative.
There might be other representations characterizing the law on the path space; however, to the best of our knowledge, the PCF is the only such representation that has been proven characteristic, uniformly bounded, and differentiable.
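The non-commutativity point can be illustrated numerically (illustrative code, not from the paper): scalar characteristic-function factors commute, so reordering increments is invisible, while matrix exponentials of anti-Hermitian generators, whose products are unitary, do not commute:

```python
import numpy as np

def u_exp(A):
    # exp(A) for anti-Hermitian A, via the spectral theorem on H = -iA
    w, V = np.linalg.eigh(-1j * A)
    return (V * np.exp(1j * w)) @ V.conj().T

# Scalar characteristic functions commute: the order of increments is invisible.
a, b = 0.7, -1.3
assert np.isclose(np.exp(1j * a) * np.exp(1j * b), np.exp(1j * b) * np.exp(1j * a))

# Unitary developments do not: mapping two increments to anti-Hermitian
# generators yields order-dependent products of unitary matrices.
A = np.array([[0, 1], [-1, 0]], dtype=complex)    # anti-Hermitian
B = np.array([[1j, 0], [0, -1j]], dtype=complex)  # anti-Hermitian
assert not np.allclose(u_exp(A) @ u_exp(B), u_exp(B) @ u_exp(A))
```

This is the mechanism by which the PCF can distinguish time series that agree in their marginal distributions but differ in temporal order.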
**2. Embedding layers**
We thank the reviewer for pointing out the potential restrictions caused by embedding layers. Some clarifications and further explanations are as follows:
1. We concur with the reviewer on the need for the differentiability assumption on the embedding layer. As long as the embedding map is differentiable, the continuity and differentiability properties (Theorem 3.6) hold after applying the embedding layers. In fact, it is usually taken for granted that the neural networks used for the embedding map are differentiable, so as to ensure the validity of gradient descent algorithms. We shall clarify this point in the revised manuscript.
2. Indeed, gradient constraints are required by various GAN models, such as Wasserstein GANs, in order to impose additional regularity or an upper bound on the gradient norm of the critic function/embedding layer. In these GAN models, the distance may become unbounded in the absence of further constraints. However, the PCFD distance between any two distributions on the path space is *uniformly* bounded by $2m^2$, as proved in Lemma 3.5. Therefore, when applying EPCFD to the embedded distributions, the uniform boundedness assumption holds automatically.
3. In our work, the reconstruction and regularization losses are not introduced to address the issues of gradient constraints, continuity, or differentiability --- these are taken care of naturally by the aforementioned mathematical properties of PCFD. Instead, the reconstruction and regularization losses aim at imposing injectivity on the embedding function $f$, so that $X \stackrel{d}{=} Y$ whenever $f(X) \stackrel{d}{=} f(Y)$ for any path-valued random variables $X$ and $Y$ (here $\stackrel{d}{=}$ means equality in distribution). The following trivial example shows why the injectivity of $f$ is needed: if $f$ is simply a constant function, then $\mathrm{PCFD}(f(X),f(Y)) = 0$ no matter whether the distributions of $X$ and $Y$ agree or not.
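The constant-function example can be made concrete with a few lines of illustrative NumPy (the embedding `f` here is a deliberately degenerate stand-in, not our trained embedding):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(0.0, 1.0, size=1000)  # two visibly different distributions
Y = rng.normal(5.0, 1.0, size=1000)

f = lambda x: np.zeros_like(x)       # constant, hence non-injective, embedding

# Any distance computed on the embedded samples vanishes, so a
# discriminator built on f(X), f(Y) cannot tell X and Y apart,
# even though the raw samples differ markedly.
assert np.allclose(f(X), f(Y))
assert abs(X.mean() - Y.mean()) > 3
```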
**3. Experimental validations**
To address this question, we first refer to the discussions on comparison with existing literature and choice of datasets in *Reply to all reviewers*. Besides, in what follows let us elaborate on our method in the context of Refs. [2, 3].
The main objective of this work is to propose a novel distance metric to improve the quality of GANs for time series generation. In contrast, both [2] and [3] proposed new frameworks for generating sequential data, which may not be suitable for direct benchmarking against our method. It would, however, be interesting to incorporate our method within their frameworks. Specifically, [2] proposed a framework to tackle multivariate time series by comparing each channel of the real and fake time series and then comparing across all channels globally. Within this framework, PCFD can be used as a distance measure --- in place of the average cross-entropy loss across time --- for both channel-wise and global comparisons. Similarly, for [3] we may replace the average cross-entropy loss by PCFD in their min-max objective function (see Equation (11) in [3]).
Questions:
**1.** Yes, this is a typo. We will fix it accordingly.
**2.** IGM stands for Implicit Generative Model; it was our mistake to omit this clarification. We will modify this part accordingly.
**3.** The generator loss is the EPCFD distance between the reconstructed noise $F_{\theta_f}(G_{\theta_g}(Z))$ and $F_{\theta_f}(X)$. In contrast, the regularization loss is the EPCFD distance between the noise $Z$ and $F_{\theta_f}(X)$. Although the reconstructed noise converges to the noise distribution eventually, there is no guarantee that they are the same throughout the optimization process. Thus these two losses may have distinctive trainable parameters.
We initialize $\theta_M$ and $\theta_M'$ following the method described in the supplementary material, Section B.4.1 in the original submission. The optimization for $\theta_M$ follows the procedure in [4]. Note that the parameter $\theta'_M$ has been initialized and optimized independently of $\theta_M$.
[1] Chevyrev, Ilya, and Terry Lyons. "Characteristic functions of measures on geometric rough paths." (2016): 4049-4082.;
[2] Seyfi, Ali, Jean-Francois Rajotte, and Raymond Ng. "Generating multivariate time series with COmmon Source CoordInated GAN (COSCI-GAN)." Advances in Neural Information Processing Systems 35 (2022): 32777-32788.
[3] Jarrett, Daniel, Ioana Bica, and Mihaela van der Schaar. "Time-series generation by contrastive imitation." Advances in Neural Information Processing Systems 34 (2021): 28968-28982.
[4] Lou, Hang, Siran Li, and Hao Ni. "Path Development Network with Finite-dimensional Lie Group Representation." arXiv preprint arXiv:2204.00740 (2022).
---
Rebuttal Comment 1.1:
Comment: The authors' rebuttal has addressed my concerns about the rationale for using PCF, together with the associated gradient constraints. The authors are also encouraged to indicate whether they used any gradient penalty or spectral norm in practice. Regarding the experiments, I still find the evaluation of the proposed method quite weak. I am keeping my score.
---
Reply to Comment 1.1.1:
Comment: Thank you very much for your prompt reply. We are pleased to hear that we addressed your questions regarding the methodological aspect of our work. We do not use any gradient penalty or spectral norm in our proposed PCF-GAN in the numerical experiments; this is underpinned by the theoretical properties of the PCF-GAN. With regard to evaluation, the time constraint of the discussion period prevents us from conducting thorough numerical experiments on video data. Nevertheless, we want to draw your attention to the fact that the primary objective of our work is to introduce a novel and principled discriminator for time series generation, backed by theoretical properties and the empirical efficacy shown in proper benchmarking results. This PCF discriminator can be flexibly integrated with a variety of GAN frameworks, offering the potential to achieve state-of-the-art results on more challenging real-world time series data. | Rebuttal 1:
Rebuttal: We thank all the reviewers for their insightful comments and constructive suggestions. We are pleased that all the reviewers find our work novel, sound, and theoretically motivated. We also acknowledge the shared questions from the reviewers on related work and numerical evaluation.
**Comparison to existing literature**:
- Related work: We will extend the related work section in the revised version by adding discussions on other state-of-the-art time series methods, including the comparison with our model. We plan to incorporate additional GAN models with autoencoders (*e.g.*, GT-GAN). We shall also delve into strategies for improving GAN training for time series, highlighting innovative approaches such as additional conditional loss and SDE-based neural networks (Deep Euler representation [2]).
- Baselines: A main contribution of our PCF-GAN lies in the innovation of the discriminator for time series generation, built upon the principled mathematical representation PCF. To validate its effectiveness, we focus on state-of-the-art models with discriminators tailored for time series distributions. We thus choose RCGAN and TimeGAN (which use the average cross-entropy loss over time as the discriminator) as well as COTGAN (whose discriminator loss is based on causal optimal transport). For a fair comparison, we use the same generator across all GAN models in our numerical experiments.
We acknowledge that several other time series generative models (GT-GAN, COSCI-GAN and EWGAN) pointed out by the reviewers may have superior performance on certain benchmarking datasets compared to the baselines adopted in our paper. Nevertheless, these models focus mainly on the network framework and generator architecture, while our work emphasises the innovation of the discriminator. Thus, benchmarking our method against [1, 2, 3] as suggested by **cYLd** and **BgyV** does not appear directly applicable.
- Future work. Although our numerical experiments do not include the comparison with the additional baselines, it is certainly of great interest to explore how PCFD could be incorporated into these more recent and state-of-the-art GANs for time series generation as future work. Indeed, as a distance metric on time series, PCFD can be flexibly incorporated with other advanced generators of time series GAN models, hence may further improve the performance. For example, one can replace the average cross-entropy loss used in [1, 3] and the Wasserstein distance in [2] by PCFD, with some simple modifications on the discriminators. Such considerations will be elaborated in the future work section, highlighting the flexibility of our proposed PCF-GAN.
**Evaluation**:
- Test metrics. In response to the reviewers' suggestions, we have included several additional commonly-used test metrics to evaluate the quality of the generative models from different perspectives. (1) Fitting of temporal dependency. We adopt auto-correlation metrics used in COT-GAN and cross-correlation metrics used in Quant-GAN, as suggested by Reviewer EJ2R. These are classical statistical tools to measure the temporal dependency of time series within and across channels. (2) Fitting of marginal distribution. Here we adopt the distributional metric in Quant GANs [4]. We benchmark our method against RGAN, COT-GAN, and TimeGAN. Results in the supplementary PDF show that PCF-GAN significantly outperforms baselines. We will include the additional results of these test metrics in the appendix of the revised manuscript.
- Choice of datasets. To demonstrate the effectiveness of our proposed method, we have benchmarked several sequential datasets from different domains with various characteristics, as summarised in Table 1. We acknowledge that it is of interest to validate our methods on more complex sequential data, such as video and human action. Nonetheless, training generative models on these tasks often requires tailored network architectures (*e.g.*, using deep convolution modules to learn spatial dependency within each frame of video) or more advanced frameworks, which lies outside the main focus of this paper. Furthermore, to our knowledge, these datasets are not commonly used for benchmarking generative models for sequential data (with the exception of COT-GAN, which included a short clip of video data). That said, it is certainly an intriguing direction to incorporate PCFD into complex model architectures/frameworks to tackle more intricate sequential data. This will be addressed in the future work section of the revised manuscript.
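As a rough illustration of the auto-correlation test metric mentioned above, the sketch below compares the average autocorrelation functions of real and generated batches. This is a hedged, generic stand-in: the exact COT-GAN metric may differ, and the function names and normalization here are our own.

```python
import numpy as np

def acf(x, max_lag):
    """Mean autocorrelation over a batch of 1-D series of shape (batch, time)."""
    x = x - x.mean(axis=1, keepdims=True)
    denom = (x ** 2).mean(axis=1)               # per-series variance
    lags = [(x[:, :-k] * x[:, k:]).mean(axis=1) / denom for k in range(1, max_lag + 1)]
    return np.stack(lags).mean(axis=1)          # shape (max_lag,)

def acf_distance(real, fake, max_lag=10):
    """l2 gap between the average ACFs of real and generated samples."""
    return float(np.linalg.norm(acf(real, max_lag) - acf(fake, max_lag)))
```

For instance, i.i.d. noise scores a small distance against other i.i.d. noise, but a large one against strongly autocorrelated AR(1) samples, which is the kind of temporal-dependency mismatch such a metric is meant to expose.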
[1] Jeon, Jinsung, Jeonghak Kim, Haryong Song, Seunghyeon Cho, and Noseong Park. "GT-GAN: General Purpose Time Series Synthesis with Generative Adversarial Networks." Advances in Neural Information Processing Systems 35 (2022): 36999-37010.
[2] Remlinger, Carl, Joseph Mikael, and Romuald Elie. "Conditional loss and deep Euler scheme for time series generation." Proceedings of the AAAI Conference on Artificial Intelligence. Vol. 36. No. 7. 2022.
[3] Seyfi, Ali, Jean-Francois Rajotte, and Raymond Ng. "Generating multivariate time series with COmmon Source CoordInated GAN (COSCI-GAN)." Advances in Neural Information Processing Systems 35 (2022): 32777-32788.
[4] Wiese, Magnus, Robert Knobloch, Ralf Korn, and Peter Kretschmer. "Quant GANs: deep generation of financial time series." Quantitative Finance 20, no. 9 (2020): 1419-1440.
Pdf: /pdf/ed49b89b5b3ea44928c6b2de3d0b3a6df7137311.pdf | NeurIPS_2023_submissions_huggingface | 2023 | Summary: The paper presents a new metric for distributions on the path space via PCF and provides theoretical proofs for analytic properties of the proposed loss metric which benefit GAN training. It introduces a novel PCF-GAN to generate & reconstruct time series simultaneously. It compares the proposed method with prior work and shows improved performance.
Strengths: - The paper is relatively well-written and the proposed approach is described clearly.
- The authors provide theoretical grounding for their work. They prove PCF's characteristicity, boundedness, differentiability with respect to generator parameters, and its weak continuity, which ensures the stability and feasibility of training the PCF-GAN.
- Experimental results demonstrate improved performance compared to several prior works. The authors provide comparisons with TimeGAN, CotGAN and RGAN on RV, Stock, Air and EEG datasets using discriminative and predictive metrics.
Weaknesses: - There are concerns regarding missing comparisons with related work. [A, B] also propose generative models for time series generation. [A] combines the adversarial training of GANs and the exact maximum likelihood training of CTFPs into a single framework. It designs an invertible generator and adopts an autoencoder, on whose hidden space the GAN performs adversarial training. [B] uses the Deep Euler representation and Wasserstein distances to propose three generative methods for time series. Two of the methods, EWGAN and EDGAN, demonstrate accuracy similar to state-of-the-art GAN generators and show better performance in capturing temporal dynamic metrics of the time series. The third method, CEGEN, is based on a loss metric computed on the conditional distributions of the time series. These papers are not mentioned and no comparisons are provided with them.
- There are metrics such as FID in prior work, e.g. [B], which are not used in the experiments.
[A] GT-GAN: General Purpose Time Series Synthesis with Generative Adversarial Networks; Jeon et al.;
[B] Conditional Loss and Deep Euler Scheme for Time Series Generation; Remlinger et al.;
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: The authors need to clearly distinguish their work with existing literature and provide comparisons with prior work (e.g. [A, B]).
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The authors have adequately discussed limitations of their work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We address all the questions in detail as follows.
**Comparisons with related work**
We first refer to the comparison to the existing literature in *Reply to all reviewers*. Then we further elaborate on the comparison between our method and [A, B].
[A] proposed a GAN framework to generate time series that is robust to missing data, making extensive use of continuous-time process modules therein. In particular, their framework incorporates CTFP as the generator, NCDE as the bottleneck architecture for the auto-encoder and critic function, and the average cross-entropy loss over time as the objective function. In contrast, our work aims to improve GAN training by introducing and incorporating PCFD, a novel discriminator. Thus, in view of the essential differences in the architectures of the generator and critic, we think it is not suitable to directly benchmark our model against the framework in [A]. We do acknowledge, however, that it is a very interesting direction to incorporate PCFD into the GAN framework of [A]. For instance, one may replace the discriminator in [A] by our proposed PCFD. As PCF relies on the unitary feature arising from controlled differential equations (*e.g.*, NCDE; see [C]), we conjecture that this operation will enhance the model's robustness to missing data, hence improving the performance compared to the traditional cross-entropy loss used in [A].
[B] used the Euler discretization scheme of SDEs as a generator and the Wasserstein metrics as an objective function. For a fair comparison, one should compare PCFD in our work to the Wasserstein metrics in [B]. The use of Wasserstein metrics typically requires a 1-Lipschitz critic function, which entails further constraints on the gradient of the critic function. In contrast, in our work, no further constraints are needed at all, thanks to the boundedness of PCFD. It would be very interesting to numerically benchmark our method against [B] and analyze the performance of an integrated model based on SDE Euler discretization and PCFD. But as their code is not publicly available and re-implementation of their model from scratch within a week appears infeasible, at the moment this is beyond our scope. One minor point: according to [B], the two generative methods EWGAN and EDGAN (besides CEGEN) consistently underperform against COT-GAN.
An important innovation of CEGEN is the use of a conditional loss, which leads to superior performance ([B]). It would be interesting to substitute the $W_2$ metric therein by PCFD to match the distribution of $\mathbb{P}[X_{t+1} | X_{t}]$. More importantly, this enables us to extend the loss to match the conditional law of Step $1$ and Step $q$, *i.e.*, $\mathbb{P}[X_{t+1: t+q} | X_{t}]$. We will briefly mention it in the future work section.
**Additional evaluation metrics**
The FID score, commonly used in image generation, requires a pre-trained model to map each sample to a high-dimensional vector. For time series data, the choice of appropriate outputs or labels for pre-trained models is ambiguous. In addition, the FID score requires the assumption of normality to avoid bias ([D]). Meanwhile, in our humble opinion, FID is not widely used in the time series generation literature (*cf.* works on GT-GAN [A], TimeGAN, and COT-GAN). Taking the above into consideration, together with the fact that [B] did not provide implementation details or code for the FID score, we decided not to include this metric in our experiments.
Instead, we consider three additional metrics: auto-correlation, cross-correlation and marginal distribution metrics. They are intended to assess the temporal dependency, spatial dependency, and marginal fitting via classical statistics, respectively. See the discussion on the evaluation section in *Reply to all reviewers* and the attached supplementary PDF.
[A] Jeon, Jinsung, Jeonghak Kim, Haryong Song, Seunghyeon Cho, and Noseong Park. "GT-GAN: General Purpose Time Series Synthesis with Generative Adversarial Networks." Advances in Neural Information Processing Systems 35 (2022): 36999-37010.
[B] Remlinger, Carl, Joseph Mikael, and Romuald Elie. "Conditional loss and deep Euler scheme for time series generation." Proceedings of the AAAI Conference on Artificial Intelligence. Vol. 36. No. 7. 2022.
[C] Lou, Hang, Siran Li, and Hao Ni. "Path Development Network with Finite-dimensional Lie Group Representation." arXiv preprint arXiv:2204.00740 (2022).
[D] Heusel, Martin, Hubert Ramsauer, Thomas Unterthiner, Bernhard Nessler, and Sepp Hochreiter. "GANs trained by a two time-scale update rule converge to a local Nash equilibrium." Advances in Neural Information Processing Systems 30 (2017). | null | null | null | null | null | null |
Provable benefits of annealing for estimating normalizing constants: Importance Sampling, Noise-Contrastive Estimation, and beyond | Accept (spotlight) | Summary: The article theoretically bounds the estimation error of NCE depending on the path. The paper first shows that NCE has minimum variance among all estimators using K bridging distributions and then continues by showing that the standard polynomial path can achieve polynomial error in the limit of infinitely many bridging distributions. The authors then derive a negative result for the arithmetic path and a positive result on the oracle estimate of the path that uses the unknown normalisation constant. The results are empirically verified.
Strengths: The authors' responses clarified the main weaknesses and I am raising my score accordingly.
--------------------------
The paper appears to be well written and some of the results appear to be novel, especially the error bounds. I have not checked the math in the appendix.
The presentation is rather clear and to the point and the experiments nicely validate the theorems.
Weaknesses: - Some related work is missing. Importantly, NCE is a method that has been discovered and rediscovered several times; see for example this review that calls the same method BAR (Bennett's acceptance ratio):
[1] Krause, Oswin, Asja Fischer, and Christian Igel. "Algorithms for estimating the partition function of restricted Boltzmann machines." Artificial Intelligence 278 (2020): 103195.
- As a result of this, Theorem 1 was already known, as [1] reproduces an earlier proof that shows that NCE is a maximum likelihood estimator. Since MLEs are asymptotically efficient, their variance is governed by the Fisher information matrix. In that light, the novelty of Theorem 1 is to produce a bound on the Fisher matrix via the path integral. Similarly, [1] already proposed a framework that included the generalisation of several estimators, similar to the Bregman methodology proposed in this work.
- The practical applicability of the results is questionable. While the work shows that the two-step estimator works well given perfect samples, it might be very challenging to obtain good samples for the arithmetic path. This is because the modes that appear are disconnected from each other from the very beginning, which makes it difficult to discover them using MCMC, unlike for example parallel tempering with the polynomial path (see [1]).
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: I would suggest to discuss [1] in the article due to its relevance and clear overlap. A few of the references therein might be important as well.
Q1: you stress in the article that your proposed two-step method is not efficient. Do you foresee a way to make it efficient? Is it possible to create a larger experiment on a real estimation task that shows that it can improve on previous estimates?
Finally, on page 5 line 165 where you quote [29], I would propose to also mention Deep Belief Networks as an important model class with universal approximation properties for which estimating the normalisation constant is relevant:
Krause, Oswin, et al. "Approximation properties of DBNs with binary hidden units and real-valued visible units." International conference on machine learning. ICML, 2013.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their valuable feedback and pointing out relevant references. In the following, we will answer the reviewer’s comments point-by-point. We hope we address their concerns and hope the reviewer will consider raising their score.
**“I would suggest to discuss [1] in the article due to its relevance and clear overlap. A few of the references therein might be important as well.” —** The references [1, 2] suggested by the reviewer are indeed relevant, and we will be sure to add them in the camera-ready version of the paper!
We agree with the reviewer that NCE has been rediscovered under many names, for estimating the normalization constant [3, 4, 5] of an unnormalized model, as pointed out by that reference and some other works [Section 3.2., 6]. Moreover, eq. 5 of [1] uses a similar identity to eq. 1.4 of [7] to generalize importance sampling to a family of estimation methods which includes NCE, also similar to [6].
**“Theorem 1 was already known” —** we agree that a small part of Theorem 1 is known — and we point this out in the paper as well: “In the binary setup [...], the NCE loss is optimal [6, 7] ”. There are several novel parts to Theorem 1 however: (1) we extend the result to a sequence of distributions; (2) we show that the optimality gap between NCE and IS vanishes in the continuous path limit; (3) we provide a closed-form expression of the MSE in that limit.
**“The practical applicability of the results is questionable” —** we agree with the reviewer that bridging the gap between theory and practice is an important challenge; we simply point out that it is an issue for the entire literature. [8] point out that “Due to the difficulty of establishing general results under arbitrary sampling schemes, we shall assume independent draws for theoretical explorations and guidelines”. Similarly, [9] remark that “the [...] analysis assumes perfect transitions which can be unrealistic in practice because many distributions of interest have separated modes between which mixing is difficult.” A number of methods have been developed to deal with sampling issues, including parallel tempering, which is mentioned by the reviewer, and tempered transitions. In fact, the path costs for both these methods, parallel tempering [eq. 17, 10] and tempered transitions [eq. 18, 11], are equal to (or upper bounded by) a sum of f-divergences, which in the limit of a continuous path is the same cost function as in our Theorem 1. This suggests our results may be applicable to more practical methods in the literature.
**“You stress in the article that your proposed two-step method is not efficient. Do you foresee a way to make it efficient?” —** Our two-step method uses an estimate of the target normalization to reparameterize the arithmetic path. It is not clear how the estimation error of the normalization propagates into an error in the path it reparameterizes. In future work, we hope to look more closely at how our optimality results can be brought to scale.
[1] Krause et al. “Algorithms for estimating the partition function of restricted Boltzmann machines.” Artificial Intelligence, 2020.
[2] Krause et al. “Approximation properties of DBNs with binary hidden units and real-valued visible units.” ICML, 2013.
[3] Geyer. “Estimating Normalizing Constants and Reweighting Mixtures”. Technical Report No. 568, School of Statistics University of Minnesota, 1994.
[4] Bennett. “Efficient estimation of free energy differences from Monte Carlo data”. Journal of Computational Physics, 1976.
[5] Gutmann et al. “Noise-Contrastive Estimation of Unnormalized Statistical Models [...]”. Journal of Machine Learning Research, 2012.
[6] Chehab et al., “Optimizing the Noise in Self-Supervised Learning [...]”. Arxiv, 2023.
[7] Meng et al. “Simulating Ratios of Normalizing Constants Via A Simple Identity”. Statistica Sinica, 1996.
[8] Gelman et al. “Simulating normalizing constants: From importance sampling to bridge sampling to path sampling”. Statistical Science, 1998.
[9] Grosse et al. “Annealing between distributions by averaging moments”. NIPS, 2013.
[10] Syed et al. “Parallel tempering on optimized paths”. ICML, 2021.
[11] Behrens et al. “Tuning tempered transitions”. Statistics and Computing, 2012.
---
Rebuttal Comment 1.1:
Comment: Thanks for your reply. I think some of my questions have been answered, but not the most important ones.
I still think the authors have not suitably acknowledged the scope of reference [1]. You write:
**we agree that a small part of Theorem 1 is known — and we point this out in the paper as well: “In the binary setup [...], the NCE loss is optimal [6, 7] ”. There are several novel parts to Theorem 1 however: (1) we extend the result to a sequence of distributions;** [...]
[1] already includes the derivation of the result beyond a pair of distributions. To be more exact, the authors consider pairs of forward and reverse paths through the set of distributions and then derive NCE for those paths via maximum likelihood. To the reviewer's understanding, this is exactly the setting considered by the article. While points (2) and (3) appear to be novel, the fact that (1) is not novel should be properly credited.
Regarding the practical applicability: I do not see it as an issue that the theoretical work requires perfect samples, and I am not assigning any negative points for that. My main point here is that I think the assumption needs to be reviewed and discussed in terms of its meaning in the two settings. To make it clear: **I believe that the arithmetic path requires a stronger sample oracle, and the difference in asymptotic behaviour is a result of the arithmetic path moving more of the complexity into the sampler**.
I am willing to raise my score if these points are discussed in the article.
---
Reply to Comment 1.1.1:
Title: Answer to Reviewer peRm
Comment: We thank the reviewer for pointing out these two points and we will discuss them in the camera-ready version of the paper. We hope this addresses the remaining concerns.
- **optimality of the NCE estimator for a sequence of distributions**
We have reread the reference [1] provided by the reviewer: the optimality of the NCE estimator is indeed extended to a sequence of distributions in eq 16 of [1]. **We propose modifying the sentence preceding our Theorem 1 to the following**: “This optimality result has been extended to a sequence of distributions K > 1 [eq. 16, 1]. We show that in the limit of a continuous path, the gap between annealed IS and annealed NCE is closed and we provide an expression of the estimation error in that limit”.
On a side note, note that there is a subtle difference between the frameworks of [1] and of our paper:
- in our paper, what is estimated are the log-ratios **directly** [eq 1 our paper]
- in [1], what is estimated are the ratios [eq 15, 1] and **then** the log can be applied
Both are valid estimators of the log-ratios, but it is not clear that they are the same, nor that they have the same MSE. Therefore, showing that the NCE loss is optimal in the MSE sense could be a slightly different result in the two cases. It turns out we can show that both estimators do have the same MSE, and we are happy to include the derivations in the appendix.
[1] Krause et al. “Algorithms for estimating the partition function of restricted Boltzmann machines.” Artificial Intelligence, 2020.
- **optimality of the arithmetic path**
As we understand, the reviewer highlights that the sampling task algorithmically depends on the parameterization of a path, as the parameterization defines which sequence of intermediate distributions will need to be sampled from. In the case of the arithmetic path, the optimal parameterization requires an oracle for calculating Z: this means we cannot sample from the optimal arithmetic path before knowing Z (at least approximately) and this makes the problem potentially computationally more difficult. Is that a fair assessment of the reviewer’s comment?
We agree and do in fact say in the draft that the optimality of the arithmetic path is highly dependent on having an oracle for Z, for example in the paragraphs preceding Theorems 4 and 5. **We are happy to further clarify this and propose to add the following text:** “Note that the optimality of the arithmetic path requires a re-parameterization in terms of the target normalization: this means that the optimal arithmetic path cannot be sampled from without such an oracle. Such an oracle might be in some instances computationally difficult to implement.” | Summary: This paper investigates the benefits of using bridge distributions to estimate the unknown normalizing constant $Z_1$ of a given density $p_1 = f_1 / Z_1$, with $f_1$ known. There are many ways for estimating a normalizing constant, e.g. importance sampling, bridge sampling, umbrella sampling and noise contrastive estimation. For example, in importance sampling an instrumental density $p_0$ from which it is easy to sample is introduced and the normalizing constant is estimated by weighting the samples from $p_0$ according to the ratio $f_1 / p_0$. Naturally, the accuracy of the estimation depends on how far $p_0$ is from $p_1$ (in Rényi 2-divergence for example or KL), and in most practical cases $p_0$ will be in fact very far from $p_1$. A nice idea then consists in introducing bridge distributions that "simplify" the estimation; we introduce a sequence of distributions $p_k$ (which themselves have intractable normalizing constants) such that the discrepancy between $p_{k+1}$ and $p_k$ is small and use them to estimate the normalizing constant. There are many ways for choosing such a sequence of distributions including geometric path, arithmetic path.
This paper investigates the effect of each design choice: the estimator (they challenge the default choice of IS) and the sequence of bridging distributions. More specifically, their contributions are the following:
- They show that the NCE loss is optimal, whatever the number $K$ of bridging distributions and thus showing that IS is in fact not optimal. This extends the result for $K = 1$.
- The performance of IS is known to suffer from exponential dependence on the dimension and this phenomenon is now sharply quantified. In this paper it is shown that this is also the case for NCE at least for the exponential family, which hints that it should be also the case for more complex densities.
- The hope of using the bridging distributions was to diminish the effect of the dimensionality. It is shown that in the limit of infinite bridge distributions and with geometric paths, the error is no longer exponential but only polynomial.
- Finally, in the presence of an oracle providing the target normalizing constant $Z_1$, they show that the arithmetic path can provide an error independent of the dimension.
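To make the estimators summarised above concrete, the following is a minimal, hedged sketch (not code from the paper; the Gaussian target, all names, and the schedule are illustrative) contrasting plain importance sampling with annealed estimation of $Z_1$ along a geometric path, using perfect samples from each bridging distribution as the paper assumes:

```python
import numpy as np

rng = np.random.default_rng(0)
sigma2 = 16.0          # target variance; true Z_1 = sqrt(2*pi*sigma2)
n = 10_000             # samples per (bridging) distribution

def log_f(x, beta):
    # Geometric path f_beta = f0^(1-beta) * f1^beta between
    # f0 = N(0,1) density (Z_0 = 1) and the unnormalized f1(x) = exp(-x^2/(2*sigma2)).
    lam = (1 - beta) + beta / sigma2
    return -0.5 * (1 - beta) * np.log(2 * np.pi) - 0.5 * lam * x ** 2

# Plain IS (K = 1): the weights f1/p0 are heavy-tailed when sigma2 >> 1,
# so this estimator has infinite variance here.
x = rng.standard_normal(n)
Z_naive = np.mean(np.exp(log_f(x, 1.0) - log_f(x, 0.0)))

# Annealed IS over K bridging distributions, with perfect (exact Gaussian) samples.
K = 100
betas = np.linspace(0.0, 1.0, K + 1)
log_Z = 0.0
for b0, b1 in zip(betas[:-1], betas[1:]):
    lam = (1 - b0) + b0 / sigma2
    x = rng.standard_normal(n) / np.sqrt(lam)   # exact sample from p_{b0}
    log_Z += np.log(np.mean(np.exp(log_f(x, b1) - log_f(x, b0))))
Z_annealed = np.exp(log_Z)

print(Z_naive, Z_annealed, np.sqrt(2 * np.pi * sigma2))
```

In this toy setting the plain IS weights have infinite variance, while the annealed product of per-step ratios stays close to the true value, loosely mirroring the exponential-versus-polynomial error behaviour the paper analyses.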
Strengths: I find the theoretical contributions of this paper original, interesting and significant. Indeed, I appreciate that the authors have deeply investigated some a priori knowledge that practitioners have, such as the optimality of IS and the effect of the annealed distributions. Furthermore, I very much appreciate that the theoretical results are validated empirically on toy examples.
Weaknesses: The assumptions made in the paper are very strong (perfect sampling, for example) and some of the results are only asymptotic in the number of bridge distributions. This makes me doubt the broad relevance of this paper to the NeurIPS community. Furthermore, although the proposed two-step procedure was shown to be numerically optimal, it is not tested numerically in the realistic scenario where one has to resort to MCMC for sampling. I believe there is no reason why this should not be tested and compared to more traditional methods; otherwise it is practically useless.
Technical Quality: 4 excellent
Clarity: 3 good
Questions for Authors: If I am not mistaken, the estimator $\hat{Z}_1$ in the first line of table 1 is not an estimator of $Z_1$ as $f_0$ is not normalized. Also, in eq. 4 and eq. 5 I believe that there should be $+ \log Z_0$.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 3 good
Contribution: 3 good
Limitations: See weaknesses
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their valuable feedback, and we are glad the reviewer finds the paper “original, interesting and significant”. We next address comments point-by-point and hope the reviewer will consider raising their score.
**The assumptions made in the paper are very strong [...] This makes me doubt about the broad relevance of this paper to the NeurIPS community. —**
While we agree with the reviewer that some of the assumptions we make are quite strong, we would also like to point out that they are fairly standard in the related literature. For example, [2] remark that their “analysis assumes perfect transitions which can be unrealistic in practice because many distributions of interest have separated modes between which mixing is difficult.” The assumptions of “perfect sampling” and of a “continuous path” are commonly made [1, 2, 3, 4] to make the theory tractable and obtain results that can serve as coarse guidelines. This also includes recent works in the NeurIPS and ICML communities [2, 3, 4]. We of course agree that it would be great to further close the gap between theory and practice, and hope that papers like ours will serve as a jumping-off point for such work.
**Typos —** the reviewer is correct about the typo in Table 1: $f_0$ should be $p_0$ in order to obtain an estimator of the target normalization. Also, in eqs. 4-5 the plus sign in front of $\log Z_0$ vanished. Thank you for picking up on this.
[1] Gelman et al. “Simulating normalizing constants: From importance sampling to bridge sampling to path sampling”. Statistical Science, 1998.
[2] Grosse et al. “Annealing between distributions by averaging moments”. NIPS, 2013.
[3] Brekelmans et al. “All in the Exponential Family: Bregman Duality in Thermodynamic Variational Inference”. ICML, 2020.
[4] Goshtasbpour et al. “Adaptive Annealed Importance Sampling with Constant Rate Progress”. ICML, 2023.
---
Rebuttal Comment 1.1:
Comment: I would like to thank the authors for their rebuttal. I believe that my comment on testing the two-step procedure using MCMC sampling has not been addressed. I am interested in knowing if the authors have tested this and if they believe that it can bring any improvement with respect to more traditional methods (of course, a negative answer will not influence my score!).
---
Reply to Comment 1.1.1:
Title: Answer to reviewer q5r2
Comment: We thank the reviewer for their question: we have not yet thoroughly tested the two-step procedure using MCMC sampling. We expect the results to vary with the situation: depending on the MCMC sampler and on the target distribution, the mixing times would be different and so would the levels of statistical performance and mismatch to the theory.
That being said, we inspected the robustness of the two-step method when running experiments using perfect samples, and noticed behavior consistent with what we wrote above. Namely, we noticed that the performance of the two-step procedure strongly depends on the quality of the first step (i.e., how well $Z$ is pre-estimated using the geometric path).
Strengths: Disclaimer: I am new to noise contrastive estimation, but familiar with the annealed importance sampling literature.
* The paper provides a rigorous analysis of different annealing schedules used in noise contrastive estimation.
* Previous methods for estimating normalizing constants are obtained as a special case, such as umbrella sampling, ratio sampling, and bridge sampling.
* The paper shows that the popularly used geometric annealing schedule is suboptimal.
Weaknesses: Given my lack of expertise in terms of NCE, I will focus my comments on the claims about AIS.
* A major concern with the theoretical comparison against AIS is that the specific instance of AIS considered in this paper is known to be suboptimal. The analysis of AIS nowadays involves the "backward sampling path" formalism (first thoroughly explored in [1], but also tackled in Neal's paper [2]), where the paper's formulation of AIS corresponds to the "suboptimal $L$-kernel" (kernel implicitly used for backward sampling), where the suboptimality is already in the name. There exist attempts to obtain approximately optimal $L$-kernels [3,4].
* This is a minor point, but the fact that the AIS with the suboptimal $L$-kernel scales poorly (which is what is referred to as AIS in the paper) is already known. See p. 335 in [5], p. 11 in [6].
* As such, the claimed scope of the paper is way too broad compared to the actual contribution of the paper. I strongly recommend stating that this is a paper focusing exclusively on noise contrastive estimation in the title, abstract, and introduction.
* Also, I think the paper should be more subtle when comparing performance against importance sampling; with the backward sampling formalism, you do not need to "assume" perfect sampling; you still have non-asymptotic performance guarantees through [7], and asymptotic guarantees through [1]. So not being able to capture this is theoretically quite critical.
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: ### Major Comments
* Line 40: As mentioned above, we do have some understanding of how to tune the performance of annealing-based normalizing constant estimation. In fact, although less formal, Neal's paper [2] contains a lengthy discussion on this. Why is this not mentioned?
* Line 189-190: "... there is no definitive theory on the ability of annealing to reduce the statistical cost in a general setup." Again, I believe [2] provides some theory, and with the backward sampling formalism, further analysis is possible as in [3], which is based on the results of [7].
* Line 136-137: This is true for importance sampling, but not necessarily for AIS. In fact, this point in the cited reference [27] is shown with only importance sampling. (Thus, the quotation needs to be corrected.) In AIS, with an optimal $L$-kernel, one can achieve the variance of $i.i.d$ samples regardless of the tails, although whether one can obtain an optimal $L$-kernel is something else.
* Line 65-79: On a similar note, this is a description specific to thermodynamic integration/bridge sampling and is not how newer AIS approaches do things nowadays. In fact, in AIS/SMC, you do not need to choose a path before sampling. You can adaptively determine the sequence based on the estimated quality of samples [6].
### References
Disclaimer: I am not the author or affiliated with the authors of the papers below.
* [1] Del Moral, Pierre, Arnaud Doucet, and Ajay Jasra. "Sequential monte carlo samplers." Journal of the Royal Statistical Society Series B: Statistical Methodology 68.3 (2006): 411-436.
* [2] Neal, Radford M. "Annealed importance sampling." Statistics and computing 11 (2001): 125-139.
* [3] Bernton, Espen, et al. "Schrödinger Bridge Samplers." arXiv preprint arXiv:1912.13170 (2019).
* [4] Doucet, Arnaud, et al. "Score-based diffusion meets annealed importance sampling." Advances in Neural Information Processing Systems 35 (2022): 21482-21494.
* [5] Chopin, Nicolas, and Omiros Papaspiliopoulos. An introduction to sequential Monte Carlo. Vol. 4. New York: Springer, 2020.
* [6] Dai, Chenguang, et al. "An invitation to sequential Monte Carlo samplers." Journal of the American Statistical Association 117.539 (2022): 1587-1600.
* [7] Agapiou, Sergios, et al. "Importance sampling: Intrinsic dimension and computational cost." Statistical Science (2017): 405-431.
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 3 good
Contribution: 3 good
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their valuable feedback and pointing out the relevance of SMC literature to this problem. As we understand, the reviewer broadly makes two points. First, our analysis assumes “perfect sampling” while SMC acknowledges “imperfect sampling” which is more realistic. Second, relevant results from the SMC literature could be better referenced. In the following, we answer these concerns and hope the reviewer will consider raising their score accordingly.
**Our analysis assumes perfect sampling** (the samples are drawn from the path) **while SMC acknowledges imperfect sampling** (the samples are drawn from an MCMC kernel possibly followed by resampling) **which is more realistic** — this is a good point, and we appreciate the chance to clarify. We would like to note that for inexplicit paths of distributions, analyzing the estimation error of AIS/SMC seems unclear and challenging [Eq 38, 1]. In particular, samples from MCMC will typically follow distributions that are not analytically tractable (since they are not easily describable in closed form), thus stronger assumptions seem needed for analysis. We make the assumption of “perfect sampling” and highlight that **this assumption is fairly standard** in the related literature [4, 5, 6, 7].
In fact, **many SMC results referenced by the reviewer also make the “perfect sampling” assumption**. It translates in the SMC literature as:
- a suboptimal “time reversal” L-kernel (sentence preceding eq 2.5 of [2])
- the convergence of MCMC steps along the path (assumption 3.1 in [2])
These assumptions are needed to make the estimation error depend explicitly on the path of distributions as in eq 3.2 of [2].
**We note that even with the “perfect sampling” assumption, existing results on how the estimation error scales with dimensionality are limited:**
- Section 3.3 of [2] provides an argument that annealing can reduce the statistical complexity to polynomial in the dimension, but that argument is largely heuristic and relies on restrictive assumptions such as an essentially log-concave path of distributions (assumption 3.4 of [2])
- Section 4 of [3] discusses the benefits of annealing based on a heuristic calculation
- Other works mentioned at the end of section 3 of [2], assume independent components of the target distribution which is very restrictive or refer to “relevant discussions” for polynomial dependence in the dimensionality without a definite theory.
**We are happy to include a detailed discussion on these results from SMC literature in the camera-ready version**, to complement lines 40 and 189-190 of our paper, and thank the reviewer for the relevant references they pointed out! To complement lines 65-79 of our paper, we will also discuss versions of AIS where the path is not explicitly defined as in thermodynamic integration, but adaptively defined “on the go” as in [7], as suggested by the reviewer.
The reviewer also mentions some works that do not make the “perfect sampling” assumption. Some of these works target the optimal “backward L-kernel” [8, 9]. We note however that while they report empirical results, they do not theoretically study the estimation error. There are important challenges to a theory that uses Monte Carlo estimates of the optimal “backward L-kernel”: for example, the estimation of the backward kernel (the Stein score in [9]) introduces variance that cannot be ignored and for which the analysis can be quite involved [10].
**Regarding the scope of this paper**, the cost function that we analyze is obtained from seminal AIS literature [4, 5] and we show in our Theorem 1 that this cost function applies just as well to annealed importance sampling, annealed noise-contrastive estimation, and annealed reverse-importance sampling, under the assumption of perfect sampling. We do agree with the reviewer that our analysis of AIS strongly relies on that assumption: we could change the title to something like “Provable benefits of annealing for estimating normalizing constants: IS, NCE and beyond” or “Provable statistical benefits of annealing for estimating normalizing constants”. Hopefully this clarifies that we separate the estimation error of AIS from the uncertainty in the sampling procedure which is “oracled away”.
[1] Del Moral. "Sequential monte carlo samplers." Statistical Methodology, 2006.
[2] Dai et al. "An invitation to sequential Monte Carlo samplers." Journal of the American Statistical Association, 2022.
[3] Neal. “Annealed importance sampling”. Statistics and computing, 2001.
[4] Gelman et al. “Simulating normalizing constants: From importance sampling to bridge sampling to path sampling”. Statistical Science, 1998.
[5] Grosse et al. “Annealing between distributions by averaging moments”. NIPS, 2013.
[6] Brekelmans et al. “All in the Exponential Family: Bregman Duality in Thermodynamic Variational Inference”. ICML, 2020.
[7] Goshtasbpour et al. “Adaptive Annealed Importance Sampling with Constant Rate Progress”. ICML, 2023.
[8] Bernton, Espen, et al. "Schrodinger Bridge Samplers." arXiv preprint arXiv:1912.13170 (2019).
[9] Doucet, Arnaud, et al. "Score-based diffusion meets annealed importance sampling." Advances in Neural Information Processing Systems 35 (2022): 21482-21494.
[10] Qin et al. “Fit Like You Sample: Sample-Efficient Generalized Score Matching from Fast Mixing Markov Chains”. Arxiv, 2023.
---
Rebuttal Comment 1.1:
Title: Further Response
Comment: I sincerely thank the authors for the detailed response. I strongly believe that our discussions would strengthen and clarify the paper, given the importance and broadness of the field of normalizing constant estimation. I am very happy to update my score, given the constructive engagement of the authors.
> We make the assumption of “perfect sampling” and highlight that this assumption is fairly standard in the related literature [4, 5, 6, 7].
I agree that the perfect sampling assumption is standard in the literature on bridge sampling and thermodynamic integration. My concern is that AIS/SMC are very different from these. So I believe there is no disagreement here. But I see now (especially from Grosse *et al.*, 2013) that there are subtle differences in what people think is AIS.
> In fact, many SMC results referenced by the reviewer also make the “perfect sampling” assumption. It translates in the SMC literature as:
>
> * a suboptimal “time reversal” L-kernel (sentence preceding eq 2.5 of [2])
> * the convergence of MCMC steps along the path (assumption 3.1 in [2])
Yes, the optimal L-kernel, or "perfect sampling" assumption, has certainly been mentioned in the SMC literature. However, my point about the L-kernel is that the results of Agapiou *et al.* (2017) allow some theoretical understanding of non-optimal L-kernels as done by Bernton *et al.* (2019). And the CLT results of Del Moral *et al.* (2006) and the likes certainly provide a rigorous theory for arbitrary L-kernels. That is, SMC/(some)AIS papers "assume" perfect sampling in the sense that it ensures the best practical performance, but not for the theory to work.
To summarize, my concern is that the fact that the paper oracles away the sampling error reduces its impact/applicability to the SMC/(L-kernel-based)AIS. Given this, the conclusions against (A)IS are too strong. But I do not question the impact on other theoretical frameworks, such as thermodynamic integration (TI) and bridge sampling.
> Regarding the scope of this paper, the cost function that we analyze is obtained from seminal AIS literature [4, 5] and we show in our Theorem 1 and this cost function applies just as well to annealed importance sampling, annealed noise-contrastive estimation, and annealed reverse-importance sampling, under the assumption of perfect sampling. We do agree with the reviewer that our analysis of AIS strongly relies on that assumption: we could change the title to something like “Provable benefits of annealing for estimating normalizing constants: IS, NCE and beyond” or “Provable statistical benefits of annealing for estimating normalizing constants”. Hopefully this clarifies that we separate the estimation error of AIS from the uncertainty in the sampling procedure which is “oracled away”.
Thank you for the suggestions. Given that the paper operates within the NCE framework, I believe having NCE in the title would certainly be more descriptive.
---
Reply to Comment 1.1.1:
Title: Answer to Reviewer EpxH
Comment: We thank the reviewer for their engagement, as well as updating their score and the discussion points on how AIS is analyzed from an SMC / L-Kernel perspective! | Summary: In this work, the authors make a number of contributions to the area of estimating normalization factors using annealing from some "proposal" $p_0$ to a "target" $p_1$.
The authors start off by nicely extending recent works by Chehab et al. (2023) on the relation between importance sampling (IS) and noise-contrastive estimation (NCE) to the scenario involving annealing, introducing the notion of *annealed Bregman estimators (ABE)* in the process, computing the intermediate log-ratios by solving a classification task between samples drawn from neighboring densities.
Making in particular use of the ABE defined by the NCE loss, the authors then obtain a number of asymptotic results (in the number of temperatures $K \to \infty$) for the mean-squared error of the estimator of $\log Z_1$, i.e. the normalization constant of the target $p_1$.
In Theorem 3, they show that in the case of the commonly used geometric path $p_\beta = p_0^{1 - \beta} p_1^{\beta}$, both annealed IS and annealed NCE result in an MSE that is polynomial in the parameter distance between $p_0$ and $p_1$. This result is established under assumptions of the exponential family for both the proposal and the target, which is a more general setting than seen in previous works. Moreover, the authors also show in Theorem 2, under almost identical assumptions to Theorem 3, that even with the NCE loss instead of IS, the MSE will still scale exponentially in the distance between the parameters of the proposal and target, as is a well-known fact for the IS-based estimator.
Combining these two results, the authors have demonstrated theoretically that *annealing* allows us to move from exponential to polynomial dependence on the parameter distance, effectively bridging the gap between many empirical results from the literature where annealing has been observed to be of much help even in higher dimensions vs. standard importance sampling.
Furthermore, in Theorems 4, 5, and 6, the authors show that the MSE when using the recently introduced arithmetic path $p_\beta = (1 - \beta) p_0 + \beta p_1$ has exponential, polynomial, and constant dependence on the parameter distance, respectively, under different path-parameterizations. Of course, this path requires the normalization constant in its computation, but the authors use the observation to propose a two-step estimation procedure which they then demonstrate to be useful on a simple toy problem.
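To make the setting in this summary concrete, here is a minimal numerical sketch (not from the paper; the Gaussian proposal/target and all constants are chosen for illustration) of annealed estimation of a log normalizing-constant ratio along the geometric path, under the perfect-sampling assumption:

```python
import numpy as np

rng = np.random.default_rng(0)

mu = 8.0           # proposal N(0,1) and target N(mu,1) are far apart
n, K = 2000, 50    # samples per temperature, number of temperatures
betas = np.linspace(0.0, 1.0, K + 1)

def log_f(x, b):
    # unnormalized geometric-path density f_b = f_0^(1-b) * f_1^b
    return -(1.0 - b) * x**2 / 2.0 - b * (x - mu) ** 2 / 2.0

# "perfect sampling": for these two Gaussians the geometric path is N(b*mu, 1)
log_ratio = 0.0
for b0, b1 in zip(betas[:-1], betas[1:]):
    x = rng.normal(b0 * mu, 1.0, size=n)
    log_ratio += np.log(np.mean(np.exp(log_f(x, b1) - log_f(x, b0))))

# Z_0 = Z_1 = sqrt(2*pi) here, so the true value of log(Z_1/Z_0) is 0;
# with K = 1 (plain IS) the same estimator has enormous variance for mu this large
print(log_ratio)  # close to 0
```

This is only the telescoping annealed-IS special case of the ABE family discussed above, but it illustrates why annealing tames the distance between proposal and target.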
Strengths: I believe this work to be of excellent standard.
The authors are very nicely combining the previous works of Gelman & Meng (1998) and Chehab et al. (2023) to provide both additional theoretical and empirical insights. The theoretical results are, as far as I can tell, both very novel and very relevant. In particular, I find Theorems 2 & 3 to be very pleasing as they give us a clear theoretical motivation for the usage of annealing in higher dimensions, which is a result long sought after. In addition to this, they also bring the work of Chehab et al. (2023) into the annealing setting, and demonstrate its utility both theoretically and empirically (though on toy-problems).
The presentation is of very high quality; the text is well-written, the results are presented in a clear and nice manner, illustrations are used to get the intuition across, and tables are used when appropriate to give the reader a quick overview of results.
As far as I can tell, the authors also do a good job mentioning previous works and the limitations of their own work.
Weaknesses: There are honestly very few weaknesses to point out when it comes to the paper itself; the only weaknesses I can find are related to limitations of the analysis and the resulting proposed practical methods, which I will leave to the "Limitations" section. The one aspect I'd like to raise, though this is slightly related to the aforementioned limitations, is that I would have liked to see slightly more extensive empirical results for the proposed methods. I do agree with the authors in that the empirical results are not the main focus of the work, but still, a slightly more complex problem would have been nice, to see how, for example, the two-step (trig) approach would do in such a setting.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: - Theorem 6: In Gelman & Meng (1998) in Section 4.3, they make the following comment "we doubt (51) is achievable as it is bounded above by $\pi^2 / n$ even if $p_0$ and $p_1$ are infinitely apart" referring to the continuous-time limit.
This is then slightly sharper than the $2 \pi^2 / N$ result presented in this work. Can the authors comment on this? Was this just a slightly overly confident statement by Gelman & Meng, i.e. they really meant including the factor of 2?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 3 good
Limitations: The main limitations of the theoretical analysis are of course the assumptions.
The authors assume that both the proposal and the target are in the exponential family, in addition to varying assumptions on the normalization constant of the target as a function of the parameters. This is often not the case in practice, but, as mentioned by the authors, the exponential family has some universality properties, which one could argue makes this slightly less of an issue.
As most other works on annealing, this work also assumes perfect sampling from the intermediate distributions, which is usually not the case in practice. But, as mentioned, this is a very common assumption to make. This is also mentioned by the authors.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their positive feedback! We also thank the reviewer for spotting the difference between the $2 \pi^2 / N$ in eq. 17 in our paper and the $\pi^2 / N$ from [1] which is correct. This is due to an unfortunate typo: in the supplementary material of our paper, eq 112 defines $\alpha_H = \pi / 4$, hence in eq 117 we should have $\alpha_H^2 = \pi^2 / 16$ which recovers the result of [1].
[1] Gelman et al. “Simulating normalizing constants: From importance sampling to bridge sampling to path sampling”. Statistical Science, 1998. | Rebuttal 1:
Rebuttal: We thank all four reviewers for their feedback, and for suggesting references which we plan to include in a camera-ready version to give further context to our results. In this general reply, we further clarify the relevance of our results and we address specific concerns in detail in the individual replies.
**Relevance of our results —** annealed importance sampling is a seminal method in statistics and while it is known that the choice of annealing path can greatly impact the estimation error [1], theoretical guarantees on commonly used paths have been elusive. Some prior works have relied on heuristic arguments [2, 3], strong assumptions like gaussianity [3] or essentially log-concavity [2] along the path, or a factorial target [4] to study how the estimation error scales with the dimensionality for a certain choice of path. But in a general setting, no such results exist to our knowledge. **We believe our results are the first to quantify how the geometric path — which has been the standard for more than a decade — actually impacts the estimation error in terms of the dimensionality of the problem**. Our results are also **the first theoretical analysis of the estimation error from an arithmetic path since it was introduced [5]**. We believe these results are relevant to the NeurIPS community, especially as annealing paths are becoming a central component of many recent advances in machine learning [6, 7].
[1] Gelman et al. “Simulating normalizing constants: From importance sampling to bridge sampling to path sampling”. Statistical Science, 1998.
[2] Dai et al. "An invitation to sequential Monte Carlo samplers." Journal of the American Statistical Association, 2022.
[3] Neal. “Annealed importance sampling”. Statistics and computing, 2001.
[4] Beskos et al. “On the stability of sequential Monte Carlo methods in high dimensions’. The Annals of Applied Probability, 2014.
[5] Masrani et al. “q-Paths Generalizing the Geometric Annealing Path using Power Means”. UAI, 2021.
[6] Rhodes et al. “Telescoping Density-Ratio Estimation”. NeurIPS, 2020.
[7] Song et al. “Score-Based Generative Modeling through Stochastic Differential Equations”. ICLR, 2021. | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
On the Identifiability and Interpretability of Gaussian Process Models | Accept (poster) | Summary: The authors carefully examine additive mixture of Matern kernels for Gaussian processes. For the single-output case, the authors show that a mixture of Matern kernels is equivalent to the least smooth component of the mixture (Theorem 2). Consequently, "there is no advantage in including any other component apart from the least smooth component" (Corollary 1). For the multi-output case, the authors leverage results from Bachoc et al. (2022) to identify microergodic parameters of separable multi-output kernels. These theoretical results are accompanied by demonstrations on simulated data. In addition, in three experiments on real-world data, the authors provide evidence that a mixture Matern kernels performs as well as just the least smooth components.
Strengths: I would like to thank the authors for their submission. I enjoyed reading the submission and certainly learned few new things.
## Strengths
* The paper is very well written and overall the presentation is good. I found hardly any typos.
* Additive mixtures of kernels are very important in GP modelling, since it allows the practitioner to model multiple length scales in the data at various levels of smoothness, so a careful assessment of additive mixtures like this work does is important.
* The main result of the paper, Theorem 2, is very interesting and might have important implications. I've read through the proof, and, modulo my comment below in the weaknesses, I think that the proof is otherwise likely alright.
* The experiments appear to nicely support the central conclusion of the paper: "the inclusion of kernels with different smoothness does not necessarily improve prediction accuracy" (Section 7). (However, I have doubts about this. Please see below.)
Weaknesses: ## Weaknesses in the Method and Theory
### Claim of Theorem 1 Not Fully Supported by the Proof, but a Fix Appears Possible
The claim of Theorem 1 consists of two parts: (1) $K$ is $d$-times MSD, and (2) "so the smoothness of $K$ is determined by the least smooth component". For (2) to follow from (1), it is also necessary that $K$ is _not_ smoother than $d$-times MSD! That is, it is necessary that $K$ is _not_ $(d + 1)$-times MSD.
For functions, consider the following example. Suppose that $f_1$ is differentiable and that $f_2$ is non-differentiable. Is $f_1 + f_2$ then non-differentiable, so the smoothness of the sum is determined by the least smooth component? The answer is no: if $f_1 - f_2$ is non-differentiable, then a counter-example is found with $g_1 = f_1$ and $g_2 = -f_2$; and if $f_1 - f_2$ is differentiable, then a counter-example is found with $g_1 = f_1 - f_2$ and $g_2 = f_2$. Therefore, for sums of functions, the smoothness is _not necessarily_ determined by the least smooth component, because the sum can be smoother than the least smooth component! Although the claim is false for functions, kernels have very different properties, and I think the claim is true in that case.
Hence, to fully support the claim in Theorem 1, it is necessary that the authors prove that $K$ is not $(d + 1)$-times MSD, which is currently not shown in the proof. Fortunately, I think that the proof is easily extended by making use of the following property: if $f_1$ is integrable and $f_2$ is non-integrable, then, by $|f_1 + f_2| \geq |f_2| - |f_1|$, $f_1 + f_2$ is also non-integrable.
### Question About Proof of Theorem 2
In the proof of Theorem 2, the fourth inequality on line 25 of the appendix seems to use the following property:
$ \frac{f(\omega) + o(f(\omega))}{g(\omega) + o(g(\omega))} = \frac{f(\omega)}{g(\omega)}(1 + O(\|\omega\|^{-2})) $
where $f(\omega)$ and $g(\omega)$ are the first terms of the particular mixtures of $p_l$ and $\tilde p_l$. Now,
$ \left| \frac{(f(\omega) + o(f(\omega)))/(g(\omega) + o(g(\omega)))}{f(\omega)/g(\omega)} - 1\right| = \left|\frac{1 + o(1)}{1 + o(1)}\right| = |o(1)|$,
so
$ \frac{f(\omega) + o(f(\omega))}{g(\omega) + o(g(\omega))} = \frac{f(\omega)}{g(\omega)}(1 + o(1)) $,
which is a weaker result! If you're a little more careful, instead of $o(1)$ you might get a $O(\|\omega\|^{-q})$ where $q$ depends on the ratio of the leading term and the second term in the particular mixtures of $p_l$ and $\tilde p_l$. Importantly, $q$ might be very small, smaller than $2$. It's not at all obvious to me that $q=2$ is true! This may seem like a minor detail, but it could mean that the integrand actually depends on $O(\|\omega\|^{-2q})$ for $q$ potentially very small, which would affect whether the integrand is integrable or not.
I would like the authors to reply to this question. Since Theorem 2 is the central result of the paper, it is important that the proof is bulletproof.
### I Disagree With Corollary 1
Corollary 1 states that "there is no advantage in including any other components apart from the least smooth component".
I strongly disagree with this conclusion, because I think that it misinterprets the concept of equivalence.
If two GPs are equivalent, then they produce _asymptotically_ similar predictions in the limit of infinite data under fixed-domain asymptotics.
_However_, for finite data, equivalent GPs may give different predictions, and this difference may be practically very significant!
Therefore, to conclude that "there is no advantage in including any other components apart from the least smooth component", in my opinion, is wrong.
Let me give a simple counter-example.
Let $k_1$ be a Matern-$1/2$ kernel with length scale $1$ and let $k_2$ a Matern-$3/2$ kernel with length scale $1000$.
Consider the mixture $k = k_1 + k_2$, and suppose that data is generated from $k$.
Suppose that we have $3$ observations randomly sampled from $[0, 2000]$.
If we make predictions with just the least smooth component, $k_1$, then, because of the short length scale of $k_1$, the predictive mean is basically zero on all of $[0, 2000]$ except near the observations.
On the other hand, if we make predictions with the mixture $k$, then, because of the long length scale of $k_2$, the predictions are much more reasonable.
(You can concoct a similar scenario by considering a Matern-$1/2$ kernel with length scale $1$ and a Matern-$v$ kernel for very high $v$ (so basically a squared-exponential kernel) also with length scale $1$. For finite data, the difference in smoothness will give very different predictive uncertainty.)
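This counter-example is easy to check numerically. A minimal sketch with plain numpy (the three observation locations and values below are made up for illustration):

```python
import numpy as np

def matern12(r, ell):
    return np.exp(-r / ell)

def matern32(r, ell):
    a = np.sqrt(3.0) * r / ell
    return (1.0 + a) * np.exp(-a)

def posterior_mean(kern, X, y, Xs, jitter=1e-8):
    # standard GP regression mean: k(Xs, X) K(X, X)^{-1} y
    K = kern(np.abs(X[:, None] - X[None, :])) + jitter * np.eye(len(X))
    Ks = kern(np.abs(Xs[:, None] - X[None, :]))
    return Ks @ np.linalg.solve(K, y)

X = np.array([300.0, 900.0, 1600.0])   # 3 observations on [0, 2000]
y = np.array([1.0, -0.5, 0.8])
Xs = np.linspace(0.0, 2000.0, 201)

m_single = posterior_mean(lambda r: matern12(r, 1.0), X, y, Xs)
m_mix = posterior_mean(lambda r: matern12(r, 1.0) + matern32(r, 1000.0), X, y, Xs)

far = np.abs(Xs[:, None] - X).min(axis=1) > 20  # grid points far from all data
print(np.abs(m_single[far]).max())  # essentially 0: reverts to the prior mean
print(np.abs(m_mix).max())          # order 1: the long length scale interpolates
```

The single Matern-1/2 kernel with length scale $1$ predicts the prior mean everywhere except in tiny neighborhoods of the three observations, while the mixture gives a non-trivial predictive mean over the whole interval, exactly as argued above.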
### Attribution of Theorem 3
In the main body, the authors present Theorem 3 as if it is a completely novel result.
However, Theorem 3.(i) is basically Theorem 3 by Bachoc et al. (2022), and Theorem 3.(ii) follows nearly immediately from Theorem 2 by Bachoc et al. (2022).
The authors acknowledge this in the proof in the supplement, but they should really state this attribution in the main body.
Because of the similarity to the theorems by Bachoc et al., I consider Theorem 3 by the authors only a minor contribution.
## Weaknesses in Experiments
* In Simulation 1, the authors state that "it is evident that the mixture kernel demonstrates a degree of smoothness similar to the Matern kernels with $\nu = 1/2$". Although I think that the result is true, I think that this is not necessarily so obvious from Figure 1. To really determine the smoothness of the samples of the mixture kernel, the authors should provide more conclusive evidence, e.g. by numerically determining whether the function is differentiable: numerically compute $|f(x) - f(x + h)|/|h|^{\alpha}$ for $h \to 0$ and _e.g._ $\alpha = 1$ (for once differentiability).
* In Simulation 2, the authors demonstrate that $w_1 \sigma_1^2 \alpha_1$ is identifiable and they claim that _only_ this parameter is identifiable. To really give evidence that only $w_1 \sigma_1^2 \alpha_1$ is identifiable, the authors should show the parameters that would be microergodic if you would consider the kernels separately: $w_2 \sigma_2^2 \alpha_2$ and $w_3 \sigma_3^2 \alpha_3$. These parameters are also not shown in Figure S1. Without showing these additional parameters, I'm not convinced that the figure really proves that only $w_1 \sigma_1^2 \alpha_1$ is identifiable.
* Throughout the real-world experiments, the authors use Adam/SGD. I wonder why the authors wouldn't just run L-BFGS-B until convergence at a reasonable tolerance. This would likely eliminate some difficulties in optimising the mixture kernel. In addition, the supplement says that the kernel matrices are _heavily_ regularised by a diagonal of magnitude $0.01$ or $0.1$. I think that is _much more_ than should be necessary if one uses double precision. I'm not sure if it would change any results, but it does make me suspicious.
* In Application 1, the authors compare a Matern mixture kernel against a Matern-1/2 kernel to perform texture extrapolation. Although I don't disagree with the result that the mixture Matern kernel performs similarly to the Matern-1/2 kernel, I think both the mixture kernel and the Matern-1/2 kernel are a very poor fit for the data, which can be seen from the prediction in Figure 4. The point of these textures is that extrapolation is possible because they are periodic, so one should really use a (weakly) periodic kernel. I understand that Theorem 2 does not apply to mixtures of periodic Matern kernels, just to regular Matern kernels, but to me that suggests that this is not a good data set to demonstrate Theorem 2 on.
### Experiments Do Not Strongly Support Main Claim
My main issue with the experimental section is that the authors take some data sets, compare a mixture of Matern kernels versus a Matern-1/2 kernel, observe that the performance of the two is very similar, and conclude that, generally, there might be no advantage in adding additional components to the mixture. In my opinion, this is not the right conclusion. The right conclusion is that either (a) generally, there might be no advantage in adding additional components to the mixture; or (b) specifically in the chosen experiments, there is no advantage in adding additional components to the mixture.
_To really conclude (a), the authors should compare the mixture kernel to the Matern-1/2 kernel on a data set where conventionally one would agree that a Matern mixture kernel would be a good choice._ An example of such a data set is a time series that has a noisy short-length-scale component and a smooth long-length-scale component. In contrast, economic data sets such as prices are traditionally modelled with an Ornstein-Uhlenbeck process (Matern-1/2 kernel), so I'm not surprised additional smooth kernels won't help there. Moreover, if you visualise the gene expression data, you will see that the data is _very_ erratic and definitely doesn't contain any smooth components. Therefore, I am not at all convinced that the authors have made a good case for (a).
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: ## Conclusion
I think that this is a great submission with very interesting results. However, in my opinion the authors have not provided sufficient evidence for the central message of the paper, which is that the inclusion of kernels with different length scales does not improve prediction accuracy. (Please correct me if this is not the central message of the paper.) The authors have not provided sufficient evidence because of the following two reasons:
* I believe that the conclusion in Corollary 1 is not justified. Please see the reasoning above.
* I believe that the data sets chosen in the experiments do not contain any smooth components, so obviously modelling smooth components in the data will not get you any improvement in predictive accuracy. Please see the foregoing section. Therefore, I think that the experimental results do not support the central message of the paper.
In addition, the proof of Theorem 2 might have an error (please point out whether I'm right or wrong about this!), Simulations 1 and 2 (please see above) have flaws, and I think that Theorem 3 is only a minor contribution. If I add all this up, I'm afraid that I must recommend rejection at this point.
_However_, I'm very willing to debate the above arguments and revise my score if necessary. Therefore, authors, please carefully argue the above, because I would love to be wrong and change my recommendation to accept.
EDIT
In the discussion below, the authors have outlined changes that address the above shortcomings. These changes are substantial. Since uploading revisions is not possible, normally I would recommend that the revised PDF go through another round of review. However, I feel that the authors and I are now aligned sufficiently well, so this time I am willing to recommend a "borderline accept".
I should emphasise that I believe that this submission likely deserves a better score than "borderline accept", but I am not willing to recommend any higher score before reviewing the revised PDF.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 3 good
Contribution: 3 good
Limitations: See above.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal:
We sincerely thank the reviewer for the meticulous and insightful examination of our work. Your comprehensive feedback has been instrumental in enhancing the quality and clarity of our paper. In response, we have addressed each of the concerns raised and revised the manuscript accordingly. While we are unable to upload the revised manuscript at this juncture, please find below our point-by-point response to your comments. Additional figures that further elucidate our points are provided in the attached 1-page PDF for rebuttal. We deeply appreciate the time and effort you've invested in guiding our revisions.
**About Theorem 1**
A: Thank you for your keen observation. It's pivotal to also ensure that the kernel doesn't exceed this smoothness level. Your suggestion aligns with the proof's direction: since $\rho_1(\omega)\omega^{2d+2}$ is non-integrable, it implies that $\rho(\omega)\omega^{2d+2}$ is similarly non-integrable.
Your example effectively distinguishes the differentiability of a deterministic function and mean-squared differentiability of random functions. We value your comprehensive feedback and have provided a clear proof of Theorem 1 in our revised paper.
**About Theorem 2**
A: You've identified a crucial detail. We omitted a key assumption about smoothness: $\nu_{l+1}-\nu_l\geq 1$. This assumption is reasonable since if two consecutive $\nu$’s are too close, then the corresponding kernel components $K_1$ and $K_2$ admit the same smoothness, negating the need for a mixed kernel (see Theorem 4 in the supplement).
As detailed in line 23 in the supplement, the order of the leading term $p_1(\omega)$ is $2\gamma_1$, while the orders of all other terms, $2\gamma_2,\cdots,2\gamma_L$, are smaller than $2\gamma_1$ by at least 2. As a result, if we divide both the numerator and the denominator by $\|\omega\|^{2\gamma_1}$, the denominator becomes
$\sum_{l}{w_l}{\sigma}_l^2{\alpha}_l^{2\nu_l}{p}_l(\omega)/\|\omega\|^{2\gamma_1}=w_1\sigma_1^2\alpha_1^{2\nu_1}+O(\|\omega\|^{-2})$.
Applying the same trick to the numerator, the ratio simplifies to $\frac{\widetilde{w}_1\widetilde{\sigma}_1^2\widetilde{\alpha}_1^{2\nu_1}}{{w_1}{\sigma}_1^2{\alpha}_1^{2\nu_1}}(1+O(\|\omega\|^{-2}))$.
**About Corollary 1**
A: We agree with your perspective on the significant difference between a model's asymptotic behavior and its finite sample performance. Corollary 1 emphasized asymptotic equivalence, possibly downplaying the importance of finite sample scenarios. Our revision clarifies this.
**About Theorem 3**
A: We recognize the need for proper attribution. While Theorem 3.(i) is indeed a direct corollary of the result by Bachoc et al., Theorem 3.(ii) is a distinct extension requiring nuanced verification. We have revised the presentation of Theorem 3 to explicitly mention its relationship to Bachoc et al.
**About Simulation 1**
A: Thank you for the insight. In our context, the differentiability of a GP is in the mean-square sense (Definition 3). A numerical difference quotient of a single sample path doesn't inherently imply mean-square differentiability. Our method (Figure 6 in the 1-page PDF) appears novel and involves the following steps:
For a given fixed $x_0=0$, let $x_i=1/i$ for $i=1,2,\dots$, and we generate $y_i^l$ from the GP, where $l=1,\dots,T$ denotes the index of replicates. This allows us to approximate $\lim_{x\to 0}\mathbb{E}(f(x)-f(0))^2$ with $\beta_i=\frac{1}{T}\sum_{l=1}^T (y_i^l-y_0^l)^2$. As per Definition 3, the GP is mean-square continuous if and only if $\beta_i\to0$.
Similarly, we can approximate $\lim_{x\to 0}\mathbb{E}\left(\frac{f(x)-f(0)}{x}\right)^2$ with $\gamma_i=\frac{1}{T}\sum_{l=1}^T \left(\frac{y_i^l-y_0^l}{x_i}\right)^2$. The GP is mean-square differentiable if and only if $\lim_{i\to\infty}\gamma_i $ exists.
Specifically, for the mixture of Matérn 1/2 and 3/2 (the first column) and Matérn 1/2 (the second column), while $\beta_i\to0$, $\gamma_i$ does not converge. However, for Matérn $3/2$ (the third column), $\beta_i\to0$ and $\gamma_i$ converges. This empirical evidence bolsters the claims made in Theorem 1.
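For transparency, the procedure above can be sketched in a few lines for the Matérn-3/2 case (grid size, replicate count, and jitter below are illustrative choices, not the exact settings used for Figure 6):

```python
import numpy as np

rng = np.random.default_rng(0)

def matern32(r):
    # Matern-3/2 kernel with unit length scale and unit variance
    s = np.sqrt(3.0) * r
    return (1.0 + s) * np.exp(-s)

n, T = 40, 2000                              # grid x_i = 1/i and number of replicates
x = np.concatenate(([0.0], 1.0 / np.arange(1, n + 1)))
K = matern32(np.abs(x[:, None] - x[None, :])) + 1e-8 * np.eye(n + 1)
Y = rng.multivariate_normal(np.zeros(n + 1), K, size=T)  # T joint sample paths

diff = Y[:, 1:] - Y[:, [0]]                  # y_i^l - y_0^l for each replicate l
beta = (diff ** 2).mean(axis=0)              # estimates E(f(x_i) - f(0))^2
gamma = ((diff / x[1:]) ** 2).mean(axis=0)   # squared difference quotients

# Matern-3/2 is mean-square differentiable: beta_i -> 0 as x_i -> 0, while
# gamma_i approaches the derivative variance -k''(0) = 3.
```

Running the same loop with a Matérn-1/2 component in the kernel makes $\gamma_i$ blow up instead of converging.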
**About Simulation 2**
A: Figure 7 in the 1-page PDF now includes two panels showing inconsistencies of $w_2\sigma^2_2\alpha_2^{2\nu_2}$ and $w_3\sigma^2_3\alpha_3^{2\nu_3}$.
**About L-BFGS**
A: We appreciate the reviewer's keen observation regarding the use of the L-BFGS optimizer in our real-world experiments. Following GPyTorch's guidelines, our initial approach employed the Adam and SGD optimizers. However, upon your recommendation, we conducted the analysis again using the L-BFGS optimizer. This adjustment enhanced our optimization process, enabling us to diminish the diagonal regularization to $10^{-4}$ (Figure 7 in the 1-page PDF was obtained with L-BFGS). It reaffirms our manuscript's conclusion that only the term $w_1\sigma^2_1\alpha_1^{2\nu_1}$ is identifiable.
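For concreteness, here is a generic sketch of this kind of L-BFGS-B marginal-likelihood fit, written with scipy on synthetic Matérn-1/2 data (an illustration only, not our exact GPyTorch pipeline; all toy parameters are ours):

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)

def nll(log_params, X, y):
    # Negative log marginal likelihood of a zero-mean Matern-1/2 GP
    sigma2, ell = np.exp(log_params)
    K = sigma2 * np.exp(-np.abs(X[:, None] - X[None, :]) / ell)
    L = np.linalg.cholesky(K + 1e-6 * np.eye(len(X)))   # small diagonal jitter
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    return 0.5 * y @ alpha + np.log(np.diag(L)).sum()

# Synthetic data from a Matern-1/2 GP on a fixed domain
X = np.sort(rng.uniform(0.0, 1.0, 100))
sigma2_true, ell_true = 1.5, 0.2
K_true = sigma2_true * np.exp(-np.abs(X[:, None] - X[None, :]) / ell_true)
y = rng.multivariate_normal(np.zeros(len(X)), K_true + 1e-10 * np.eye(len(X)))

res = minimize(nll, x0=np.log([1.0, 1.0]), args=(X, y),
               method="L-BFGS-B", bounds=[(-5.0, 5.0)] * 2)
sigma2_hat, ell_hat = np.exp(res.x)

# Under fixed-domain asymptotics only the microergodic ratio sigma^2 / ell is
# consistently estimable, so we inspect the ratio rather than each parameter.
ratio_hat, ratio_true = sigma2_hat / ell_hat, sigma2_true / ell_true
```

In this toy setting the individual estimates can drift along the likelihood ridge while the microergodic ratio stays close to its true value.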
**About Application 1**
A: We have conducted a new experiment utilizing an MNIST image (Figure 4 in the 1-page PDF) due to its absence of periodic patterns. The results demonstrate pleasing prediction accuracy for Matérn kernels, thereby reinforcing our understanding of their applicability and effectiveness in this particular context.
**About experiments do not strongly support main claim.**
A: We are in full agreement with the suggestion that we should include a dataset with a smooth component. As also suggested by reviewer dNVU, we conducted a new analysis on the Mauna Loa $CO_2$ dataset (Figure 3 in the 1-page PDF). In this context, the Matérn $3/2$ appears to exhibit better prediction accuracy compared to Matérn $1/2$. However, we observe similar performance between the Matérn mixture $1/2+3/2+5/2$ and Matérn $1/2$, as well as a resemblance between the Matérn mixture $3/2+5/2$ and Matérn $3/2$. This observation further substantiates our argument that the performance of a Matérn mixture kernel is primarily dictated by the least smooth Matérn kernel within it.
---
Rebuttal Comment 1.1:
Comment: Thank you for your detailed reply to my review. I appreciate the effort that very clearly went into writing the rebuttal.
I am afraid that I continue to disagree with the main message of the paper. Specifically, I continue to disagree with Corollary 1, which sets the tone of the paper, namely that "there is no advantage in including any other components apart from the least smooth component". Whereas this statement is a very particular statement about asymptotics, in the non-asymptotic setting, such as practical applications, in my opinion, there is most definitely benefit in including more components than the least smooth component. Please see the silly but relevant example in my review.
(In addition, as a minor point, theoretical results that show that equivalent kernels cannot be distinguished, as far as I know, rely on fixed-domain asymptotics. What about increasing domain asymptotics, which is a very reasonable setting?)
For the experiments, I would again like to emphasise the following point:
> _To really conclude (a), the authors should compare the mixture kernel to the Matern-1/2 kernel on a data set where conventionally one would agree that a Matern mixture kernel would be a good choice._ An example of such a data set is a time series that has a noisy short-length-scale component and a smooth long-length-scale component.
I appreciate that you've added in an experiment on the Mauna Loa data. However, the point is not that you should have added a data set where the data is smooth, the point is that you should have added in a data set where conventionally a Matern mixture kernel would be a good choice, and a time series data set with a noisy short-length-scale component and a smooth long-length-scale component is a good choice.
For just year 2022 of the Mauna Loa data, this doesn't hold, because there is roughly only one length scale in the data: the sinusoidal like yearly pattern.
I would be interested to see the same comparison on 50 years of Mauna Loa data, with the $(\nu=\frac12)$-component initialised to 4 months and the $(\nu=\frac32)$-component initialised to 10 years. In this setting, compare extrapolation on the next 5 years between a $(\nu=\frac12)$-kernel and a mixture of a $(\nu=\frac12)$-kernel and a $(\nu=\frac32)$-kernel. I'm fairly convinced that the additional smooth component would yield substantially better extrapolations due to its ability to model the smooth long-length-scale component.
Although the paper is very well written and the technical contributions are impressive, I unfortunately still think the paper's main message is not sufficiently supported by the theoretical results and experiment.
---
Reply to Comment 1.1.1:
Title: Clarifications and Further Experiments in Response to Reviewer j39y's Feedback
Comment: Thank you for your swift and insightful response. In line with your suggestions, we've conducted additional experiments and addressed the areas of concern you highlighted. Given the platform constraints, we've presented numerical results (MSEs) in this discussion. However, should it aid in your evaluation, we are more than willing to upload the associated figures and code. Your thorough feedback has been invaluable in enhancing the rigor of our work.
**About Corollary 1**
A: We deeply value your insights and wholeheartedly concur with your perspective on Corollary 1. The nuance was initially captured in our detailed rebuttal; however, character limitations led to unintended truncations. You've aptly highlighted our asymptotic focus. To better reflect this nuance, we've revised Corollary 1 with a follow-up note:
Corollary 1: The mixing kernel $K$ is equivalent to $K_1$.
Note: In an asymptotic context, particularly with a large sample size, there's limited benefit in incorporating smoother components beyond the least smooth one. Nonetheless, for smaller sample sizes, including additional components can offer advantages depending on the specific scenario.
**About fixed domain**
A: You've pinpointed a crucial distinction. Indeed, our investigation was anchored in the context of fixed-domain scenarios. The realm of increasing domain (or forecasting) presents a more intricate landscape and has seen relatively less exploration compared to its fixed counterpart. It's imperative to clarify that all assertions, simulations, and empirical data examples we presented adhere to this fixed domain paradigm. That said, we acknowledge the undeniable significance of increasing domain scenarios. This very topic is on our research horizon, and we aim to delve deeper into it in subsequent works. We have incorporated this limitation in our discussion section in the revised manuscript.
**About experiment**
A: We appreciate your suggestion and regret any misunderstanding in our initial response. Acting upon your recommendation, we undertook the following experiment: We employed Matern 1/2, Matern 3/2, and a mixture of Matern 1/2 and 3/2 on the Mauna Loa dataset spanning from 1972 to 1981, forecasting data from 1982-1987. To provide a more discerning contrast between interpolation and extrapolation, we conducted a random split of the data from 1972 to 1987 and used one half to predict the other 50%. Over an extended timespan of 15 years, the Mauna Loa dataset displays both a noisy short-length-scale component and a smooth long-length-scale component, precisely as you highlighted. Consequently, this dataset serves as an example "where conventionally a Matern mixture kernel would be a good choice." The MSEs are summarized in the following table:
| | Matern 1/2 | Matern 3/2 | Mixture of 1/2 and 3/2 |
| --- | --- | --- | --- |
| Extrapolation | 6199.689 | 89770.53 | 1360.280 |
| Interpolation | 0.6404151 | 41.05647 | 1.242232 |
These results lend weight to your assertion that the inclusion of a smoother component is beneficial for extrapolation. Concurrently, it solidifies our stance on the veracity of fixed-domain predictions and accentuates the nuances distinguishing fixed from increasing domain scenarios.
To emphasize, all claims within our manuscript pertain to the fixed domain (interpolation) in the asymptotic framework. That said, we concur with you on the significance of exploring the increasing domain (extrapolation or forecasting) and the finite sample setting in future endeavors.
We trust this addresses any reservations you may have had regarding Theorem 2 and Corollary 1. Moreover, we'd like to underline that our work extends beyond just these elements. For instance, Theorem 1, along with the illustrative experiment presented in Figure 6 of the 1-page PDF, both stand as novel contributions to the field. This visualization compellingly demonstrates mean-square continuity and differentiability. We'd like to particularly acknowledge and thank you for suggesting this experiment; it greatly enriches the depth and clarity of our work. Meanwhile, Theorem 3 focuses on multi-output (multivariate, multitask) GPs, endorsing the utilization of the multiplicative (separable) kernel—a prevalent approach in contemporary machine learning research.
While our paper encompasses multiple facets and findings, we trust that our clarifications and the highlighted novelty of our contributions shed more light on the value of our work. We're deeply grateful for your keen insights and constructive feedback, which have significantly enriched our study. We are eager to address any further questions you might have and are open to conducting additional experiments during this discussion period. We hope that our continued dialogue and these enhancements can be viewed favorably in your evaluation.
---
Reply to Comment 1.1.2:
Title: Clarifications and Further Experiments in Response to Reviewer j39y's Feedback (Part 2)
Comment: Given character limitations on OpenReview, we have expanded upon our previous discussion by providing additional experiments and clarifications in this separate comment.
Building on the idea of examining the Mauna Loa dataset, which displays both a noisy short-length-scale component and a smooth long-length-scale component, we simulated a dataset echoing these characteristics. Specifically, we produced two time series: one originating from Matern 1/2 with length-scale=0.5 and the other from Matern 3/2 with length-scale=3. Summing these together, the resultant series aptly exhibits both a turbulent short-length-scale component and a placid long-length-scale one. We then employed Matern 1/2, Matern 3/2, and a mixture of both for extrapolation and interpolation testing. In the extrapolation phase, we forecasted the period from 11-20 based on training data from 1-10. Meanwhile, for interpolation, we randomly divided the data, utilizing one half to predict the other. The subsequent table encapsulates the MSEs:
| | Matern 1/2 | Matern 3/2 | Mixture of Matern 1/2 and 3/2 |
| --- | --- | --- | --- |
| Extrapolation | 346.2207 | 350.7294 | 346.2542 |
| Interpolation | 40.89175 | 61.12420 | 40.91430 |
In response to your recommendation — highlighting the necessity to incorporate a dataset where conventionally a Matern mixture kernel would be apt — we executed an additional experiment. For this, we synthesized data from a mixture of Matern 1/2 and Matern 3/2. In this context, the Matern mixture kernel essentially serves as the oracle kernel, reinforcing its credibility as a judicious selection. We carried out analogous tests, with the MSEs encapsulated in the table below:
| | Matern 1/2 | Matern 3/2 | Mixture of Matern 1/2 and 3/2 |
| --- | --- | --- | --- |
| Extrapolation | 142.1324 | 140.2512 | 142.1157 |
| Interpolation | 27.58031 | 37.13925 | 27.60812 |
Consistent with the outcomes from the Mauna Loa dataset, these two additional experimental results reiterate your insight: integrating a smoother component may enhance extrapolation performance. Concurrently, it solidifies our stance on the veracity of fixed-domain predictions and accentuates the nuances distinguishing fixed from increasing domain scenarios.
We trust this supplementary experiment elucidates any lingering ambiguities and positively informs your assessment. We remain receptive to further experimentation and keen to furnish any needed clarifications throughout this discussion period. Your continued feedback is deeply valued. | Summary: This paper introduces a few results on the identifiability of Gaussian Processes.
It is shown that for a Gaussian process whose kernel is a mixture of kernels, the smoothness is that of the least smooth component.
It is shown that for two mixtures of Matern kernels, the induced processes are equivalent if certain products involving only parameters of the least smooth kernels in the mixture coincide. Consequently, higher-smoothness components do not affect equivalence.
Finally, for a multi-output GP with covariance of the form $A \cdot k$, where $A$ is a fixed matrix and $k$ is a scalar Matern kernel, the matrix $A$ is identifiable.
Experiments that are intended to support the above results are included.
Strengths: The general subject of identifiability in GPs is important.
The non-technical parts of the paper are well written and it is possible to get a general idea of what the paper is about.
The experiment 5.2 is illustrative in showing that the product can be identified while the individual components are not identified properly (however, a much more detailed description of the experiment is needed).
Weaknesses: My main question is regarding the notion of equivalence used in this paper. It is not clear why this is
a useful notion. The questions are as follows:
* Clarity regarding identifiability 1. First, what is the definition of the $\equiv$ symbol that is used for kernels in all the theorems? I couldn't find this definition for some reason. Where is it defined? Does it mean that the GPs induced by the corresponding kernels are equivalent in the sense of Definition 5?
* Clarity regarding identifiability 2: I'm assuming the answer to the question above is yes.
Then, why is this equivalence interesting and relevant to identifiability? Please explain why this notion is relevant.
Indeed, if my index set is finite, I can have two GPs on this set with covariances C_1, C_2. As long as C_1, C_2 have full rank, the processes will be absolutely continuous w.r.t. each other, but they are very different processes and C_1 and C_2 can be learned. How is this related to the above notion of equivalence?
* In any case, this notion of equivalence is by no means standard in the ML community, and needs to be introduced and discussed in more detail.
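To make the finite-index-set point concrete: any two zero-mean Gaussians with full-rank covariances have finite KL divergence in both directions, and hence are mutually absolutely continuous, even though the covariances are trivially distinguishable from samples. A toy check (the matrices are arbitrary examples of mine):

```python
import numpy as np

def kl_gauss(Ca, Cb):
    # KL(N(0, Ca) || N(0, Cb)) for zero-mean Gaussians; finite whenever
    # both covariance matrices have full rank
    d = Ca.shape[0]
    Cb_inv = np.linalg.inv(Cb)
    _, logdet_a = np.linalg.slogdet(Ca)
    _, logdet_b = np.linalg.slogdet(Cb)
    return 0.5 * (np.trace(Cb_inv @ Ca) - d + logdet_b - logdet_a)

n = 5
C1 = np.eye(n)                                 # two very different full-rank
C2 = 2.0 * np.eye(n) + 0.5 * np.ones((n, n))   # covariances on a finite index set

kl12 = kl_gauss(C1, C2)
kl21 = kl_gauss(C2, C1)
# Both divergences are finite and positive: the measures are equivalent in the
# absolute-continuity sense, yet C1 and C2 are easy to tell apart from data.
```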
Additional Questions:
* Relevance: While the general subject of identifiability is important, this paper is concerned with a very specific case of Matern linear mixtures. It is not clear this specific case has been used in applications before.
Some references are provided about the use of mixtures on lines 37-38, but how many of these use specifically *linear mixtures*, and *Matern mixtures*?
* Literature discussion: There is no literature section in the paper. It is my understanding that there is previous work on identifiability, even for a single Matern kernel. This needs to be discussed in the paper, and a comparison provided. Are there new mathematical ideas involved in the results in this paper compared to existing work?
* Inverse implication in results: The results state that if parameters satisfy something, then there is equivalence. Is the reverse direction true? Is it obvious? This again is related to the somewhat non-standard notion of equivalence discussed above.
* Experiments:
As mentioned earlier, experiment 5.2 is illustrative. However, it is not specified how the optimization is performed. It may also have happened that there is no reconstruction due to local maxima of the method, rather than a general lack of identifiability.
* As for the other experiments: experiments 5.1, 6.1, and 6.2 do not illustrate anything that I would find informative. 6.2 is a 1d version of an already small multidimensional dataset. It is not clear how 6.1 is related to ML use cases.
Technical Quality: 2 fair
Clarity: 1 poor
Questions for Authors: Please see above.
My recommendation for the rating is temporary and I plan to revisit it following author's responses.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 1 poor
Contribution: 1 poor
Limitations: The subject of this work involves some abstract properties of Gaussian Processes. No specific societal impact discussion is required.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We wish to express our gratitude to the reviewer for your meticulous feedback and constructive comments. Enclosed within the 1-page PDF for additional figures, we address each point as follows.
**Weakness 1: definition of equivalence of measures**
A: Thanks for highlighting this. The symbol $\equiv$ denotes the equivalence of the Gaussian Processes induced by the two kernels (Definition 5). We've now defined it in the context of Theorem 2.
**Weakness 2: finite domain**
A: Thank you for your insightful comment. The equivalence of GPs in the context of our study is pivotal because if two kernels within the same parametric family produce equivalent GPs, then the parameters of those kernels become non-identifiable. This, in essence, underlines that one cannot uniquely determine the parameters given observed data, which is a vital understanding for model interpretation and estimation.
You're correct in noting the differences between the finite and infinite domains. When working in a finite domain, GPs are indeed just multivariate Gaussian distributions, and as you rightly pointed out, any two full-rank finite-dimensional Gaussian measures are equivalent. However, we focus on an infinite subset of Euclidean space (we've clarified this in the revision).
**Weakness 3: the notion of equivalence**
A: We've elaborated on equivalence post Definition 5 and emphasized challenges of parameter identifiability in infinite domains.
**Q1: about specific case of Matérn mixtures**
A: For additive methods, Duvenaud et al. (NeurIPS 2011) used a mixture of RBF; Duvenaud et al. (ICML 2013) applied a mixture of RBF, periodic (Per), linear (Lin), and rational quadratic (RQ); Kronberger and Kommenda (LNTCS, 2013) worked with RBF+RQ, RBF+Matérn, RBF+Per; Verma and Engelhardt (BMC Bioinformatics 2020) used RBF, Matérn 1/2, 3/2, 5/2. For more complex methods with additive and multiplication methods, Sun et al. (ICML 2018) used RBF, Per, Lin, and RQ; Lloyd et al. (AAAI 2014) employed RBF, Lin, Per, RQ, white noise, and constant.
We concur that our study focuses on the additive/multiplicative aspect of the Matérn kernel. We've incorporated this focus into the limitations section of our manuscript and we anticipate that this exploration will stimulate future research concerning identifiability and interpretability.
**Q2: literature discussion**
A: We've expanded our literature review in the introduction, highlighting the progress in the area of identifiability for single Matérn kernels. While these works provide foundational insights, our paper primarily fills the gap in literature concerning mixtures of Matérn kernels.
In terms of the mathematical intricacies, while we utilize established probability theorems on Gaussian random measures, the application and verification of these conditions for our research context presented non-trivial challenges, detailed further in our supplementary material. The introduction has now been updated to highlight this comparative discussion, which not only underlines our contributions but also contextualizes our approach within the broader GP kernel identifiability landscape.
**Q3: inverse implication**
A: Your observation is sharp: The inverse implication is not immediately obvious. While we believe that the reverse direction is true based on our understanding and intuition, its proof poses substantial challenges. In prior work on single Matérn kernels, such as Zhang (JASA 2004), the key in the proof is that two Matérn kernels differing only in their microergodic parameter define the same correlogram. However, this does not hold for mixture kernels anymore. Thus, existing probability tools are not directly applicable. This open question is certainly intriguing and worthy of future exploration.
**Q4: optimization in experiment 5.2**
A: We understand your optimization concerns. Our extensive analysis (more experiment details can be found in the supplement), including L-BFGS (as also suggested by reviewer j39y), supports our main conclusions. We also included a loss curve for Simulation 2, see Figure 5 in the PDF, implying that our findings are not the result of local maxima but are indicative of the identifiability issue we are investigating. Another supporting evidence is the distinct behavior of the microergodic parameter. It uniquely converges to its true value, even when other parameters do not.
**Q5: other experiments**
A: The goal of 5.1 is to support the claim that the smoothness of the mixture kernel is driven by its least smooth component. As also suggested by Reviewer j39y, we performed more experiments to support Theorem 1; see Figure 6 in the PDF.
For applications, we aim to replicate experiments performed in other analyses in the ML literature. 6.1 comes from Figure 1(a) in Wilson et al. (NIPS 2014), Figure 4(a) in Remes et al. (NIPS 2017), and Figure 5(a) in Sun et al. (ICML 2018). 6.2 comes from Tables 1-2 in Sun et al. (ICML 2018). To better support our points, we've implemented the Matérn mixture kernel on MNIST and Mauna Loa $CO_2$. For MNIST, the Matérn kernel has demonstrated satisfactory prediction accuracy (Figure 4 in the PDF). Consistent with the results in our manuscript, the mixture kernel exhibits comparable performance to the Matérn 1/2 kernel, evidenced by a Pearson correlation coefficient of 0.99. $CO_2$, suggested by reviewer dNVU and employed in Wilson and Adams (ICML 2013) (Figure 3 in the PDF), shows analogous performance between the Matérn mixture 1/2+3/2+5/2 and Matérn 1/2, as well as a resemblance between the Matérn mixture 3/2+5/2 and Matérn 3/2.
We agree that Matérn kernel may not apply ideally on some of the dataset. Our intention was to investigate the comparative performance of GP regression using a Matérn mixture kernel and its least smooth kernel. We understand the reviewer's concerns and are open to conducting further experiments to address any specific interests.
---
Rebuttal Comment 1.1:
Comment: This response has been slightly modified shortly after the original posting. Please refer to the current version.
I appreciate the response by the authors.
In this comment, I'd like to concentrate on the notion of equivalence that is used in this paper.
This is the central concern in my review, and I don't believe my questions in this regard in the review have been addressed in the response.
This paper, generally, is about showing sufficient conditions for two GPs to be equivalent.
The authors have agreed with me that when the index set of the GP is finite, the equivalence notion that is used in this paper is irrelevant and not useful, as all GPs on a finite set of fixed cardinality with strictly positive covariances are equivalent.
The question then is, why would an ML researcher be interested in such a definition of equivalence?
The only answer provided in the paper to this question, and referenced in the response above, is this (lines 127-130 in the paper):
>As a consequence, two equivalent GPs can not be distinguished by any finite number of realizations (Stein, 1999). Specifically, given a family of GPs parametrized by θ, if Pθ1 ≡ Pθ2 with θ1 ≠ θ2, then θ is not identifiable since we cannot distinguish between θ1 and θ2. As a corollary, there does not exist any consistent estimator for θ
To rephrase, equivalent processes can not be distinguished from a finite sample. However,
to my understanding, this only means that there is no decision rule that works **with probability 1**. This still leaves the possibility of designing decision rules that work with probability **arbitrarily close to 1**.
In particular, the statement
>As a corollary, there does not exist any consistent estimator for θ
is not clear. Can the authors provide precise reference (paper, or book chapter and page) for this statement?
Is it true that for any two members of an equivalence class, one can not distinguish between them with any probability larger than 0.5? What does this probability depend on?
Please clarify this point.
Provided that everything above is correct, the notion of equivalence in this paper is simply too weak to be useful in ML. If the authors believe otherwise, please explain.
P.S.
As an additional note, the results in this paper are sufficient, but not necessary. But since equivalence is weak, sufficiency is easy to satisfy. The equivalence does not capture the finer structure of the process. This may explain the differences of opinion between claims in the paper regarding the importance of smoothness and contrary observations by other reviewers. However, I have not considered this point in depth so far.
---
Reply to Comment 1.1.1:
Title: Clarifications to further questions from Reviewer LXB2
Comment:
Thank you for the thoughtful and detailed feedback. We address each of your queries in sequence.
**About finite set**
We appreciate the opportunity to clarify the distinction between a "finite index set" and "finite samples". A finite index set means that the domain of the GP possesses a finite cardinality, while the weaker notion "finite samples" means that the sample size $n$ is finite.
It's worth noting that when the domain is finite, employing a Gaussian process, or indeed any continuous process, might not be the most apt choice. However, situations characterized by finite samples are considerably more prevalent and realistic in the realm of ML. To illustrate, in our Application 1 the domain constitutes a square with infinite cardinality; even though the number of pixels is finite, the overall context remains relevant for our discussion.
**On the clarity of the statement and precise reference**
The concepts of indistinguishability of equivalent GPs and parameter non-identifiability can be found in foundational discussions in Ibragimov and Rozanov (1978, Chapter III), Yadrenko (1983), Stein (1999, Chapter 4), and Zhang (2004). For a precise reference, we refer to Zhang's work in "Inconsistent estimation and asymptotically equal interpolations in model-based geostatistics," Journal of the American Statistical Association, 2004. Specifically, Page 252, left column, Line 4 addresses this concern. Zhang's discussion on the equivalence of measures aligns with our statement: "Moreover, if $\{P_\theta:\theta\in\Theta\}$ is a family of equivalent measures and $\hat{\theta}_n,n\geq 1$ is a sequence of estimators, then, irrespective of what is observed, $\hat{\theta}_n$ cannot be a weakly consistent estimator for all $\theta\in\Theta$."
**About distinguishing equivalent GPs with probability more than 0.5**
Thank you for this astute question. To the best of our knowledge, discussions have traditionally centered around the intriguing case of probability 1. We've taken the liberty to furnish a more streamlined proof, particularly addressing your query. Here's an outline of the argument to demonstrate that the scenario you've posited is untenable.
Given that $P_1\equiv P_2$, let $D$ be a decision rule, a function that maps a realization of the GP to a binary output, either 1 or 2. Here, an output of $D(f)=1$ means that we believe $f$ is generated from $P_1$, and similarly for $D(f)=2$. Now, assume that $Pr_{f\sim P_1}(D(f)=1)=a>0.5$, that is, the probability of correctly detecting the true data-generating model is greater than 0.5 (better than a random guess). However, since $P_1\equiv P_2$, we have $Pr_{f\sim P_2}(D(f)=1)=Pr_{f\sim P_1}(D(f)=1)=a>0.5$, meaning that the probability of incorrectly identifying the true data-generating model also surpasses 0.5, which is a clear contradiction.
**Addressing the utility of the notion of equivalence in ML and its sufficiency**
A primary objective of our paper is to highlight that, aside from $w_1\sigma_1^2\alpha_1^{2\nu_1}$, all individual parameters $w_l$, $\sigma_l^2$, $\alpha_l$, for any $l=1,2,\cdots,L$, are non-identifiable. We respectfully offer a different perspective: to study identifiability, this direction is precisely what we aimed for. Moreover, Gaussian random measures are either equivalent or orthogonal (Stein 1999, page 114). A corollary of our Theorem 2 states: if two GPs are orthogonal, then $w_1\sigma_1^2\alpha_1^{2\nu_1}\neq \widetilde{w}_1\widetilde{\sigma}_1^2\widetilde{\alpha}_1^{2\nu_1}$. Additionally, a significant conclusion of our research is that a mixture of Matérn kernels is equivalent to its least smooth component. Even when there is reason to use such a mixture in real-data applications, we recommend caution in parameter interpretation. For instance, $w_l$ does not necessarily represent the weight of the $l$-th component. We view this as a major contribution of our paper: "understanding identifiability and interpretability is so crucial in using GPs but rarely discussed" (Reviewer PY7Z).
**Regarding differing opinions on the importance of smoothness**
We seem to have achieved a consensus regarding the discrepancies in certain experiments. To succinctly express the underlying reason: the MSEs from equivalent measures are asymptotically equal (Stein 1993, Theorem 1). That is, a substantial sample size is essential to witness the convergence of the MSEs. For the finite-sample regime, we furnished empirical evidence complemented by theoretical insights into the disparities in the MSEs between equivalent GPs. Crafting a comprehensive theory remains a formidable challenge and an open problem, even for simple, single kernels. For a detailed understanding, we kindly direct you to our discussion with Reviewer j39y due to the character limit.
We deeply appreciate your insights. We are always open to continued discussions and the exploration of additional experiments. | Summary: This paper investigates additive and separable mixture kernels in the context of Gaussian process regression. Concretely, it tests the intuition behind the convex combination of Matern processes and identifies limitations in the interpretability of the resulting mixture kernel that might contradict the intuition of the modeler.
It further analyzes the identifiability of parameters of separable Kernels in multi-output GPs.
Strengths: - I believe this paper is a useful contribution to understanding when intuition about modeling decisions diverges from the actual impact of these decisions. It thus earns its place at an ML conference in that it strictly extends the pool of resources that should be considered when designing a GP model.
- NeurIPS seems to me as the right venue to publish these results in that format, since it manages a good balance between mathematical rigor and intuition, which makes the text very approachable for more application-oriented researchers, as well.
- the overall presentation follows a very clear structure that is easy to follow. I appreciate that the authors do not obscure meaningful results behind unnecessarily complicated formulations.
Weaknesses: - The plots are aesthetically rather unpleasing and the presentation of the results would, in my opinion, profit quite significantly from a polishing of the figures.
- The authors focus solely on Matern processes, to the point where the notion of "kernel" is used interchangeably with "Matern-kernel". This limitation, however, is understandable in light of the page limit and is also addressed in the discussion section. (It does not affect my score.)
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: - I would like to understand the impact of the "smoothness-of-mixed-Matern-kernel-is-determined-by-its-least-smooth-component" argument better:
- Is it really only about smoothness? This seems rather obvious to me (and I am certain this has been known before). But Theorem 2 states that the kernel is equivalent, and hence makes a more general statement about the structure of the functions from this kernel's RKHS, than merely smoothness, right? As somebody who is not familiar with proofs about identifiability and equivalence of kernels: apart from smoothness of the resulting sample paths, does mixing in more Matern components not add *any* structure to a single (least-smooth) Matern component? I could imagine that other readers would also appreciate a clarifying statement that brings together the "same-smoothness" and "equivalence" arguments.
- What reinforces me in my endeavor to understand the impact of this statement better is also the fact that in Section 7, you write that "[...] the inclusion of kernels with different smoothness does not necessarily improve prediction accuracy".
On the other hand, Theorem 2 states that it makes absolutely no difference (not only "not *necessarily*"). Could you elaborate more on the impact of this statement and its strength?
- Suggestion: the paper includes a comparison between (i) a mixture of kernels with varying smoothness parameters and (ii) a kernel consisting only of the least-smooth component. It would be interesting to see the comparison to (iii) the other (two?) components, as well. I would expect these results would be different then. But *how* they would be different (by how much or - e.g. in the case of Figure 6 - how they differ structurally) would be quite interesting, measured by the expected effort of this addition.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Limitations of the analysis presented in the text are adequately addressed in the discussion section.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal:
We are deeply grateful to the reviewer for your thoughtful and thorough examination of our manuscript. Your observations and suggestions have been invaluable in refining the narrative and content of our paper. While we're unable to present the full revised manuscript at this time, please find our detailed responses to your comments below. Further clarifications, especially in relation to the figures, can be found in the accompanying 1-page PDF for rebuttal. Your feedback has been instrumental, and we genuinely appreciate the effort and expertise you have devoted to our work.
**Weakness 1: about the plots**
A: Thank you for the comment. We have meticulously revised and polished all the figures. Since we are not allowed to upload the revised manuscript, please see the additional figures provided in the one-page PDF for rebuttal.
**Weakness 2: about the use of "kernel" vs "Matérn kernel"**
A: Thank you for understanding the scope of our work within the constraints of the page limit. To ensure clarity and to avoid any possible confusion, we have revised the manuscript to consistently use "Matérn kernel" instead of the more generic term "kernel."
**Question 1.1: about smoothness**
A: Thank you for this insightful comment. While Theorem 1 primarily emphasizes the smoothness aspect, Theorem 2 delves into a deeper equivalence, which articulates that the Gaussian random measure associated with the mixture of Matérn kernels is fundamentally equivalent to that of the least smooth single Matérn. This means that beyond the superficial similarities in smoothness, the underlying structures of functions from the kernel's RKHS are also equivalent. In essence, incorporating additional Matérn components doesn't introduce new structural nuances beyond what the least-smooth component already encapsulates.
To aid understanding for our readers, we've appended a paragraph after Theorem 2 to bridge the connection between the smoothness result in Theorem 1 and the deeper equivalence presented in Theorem 2. Your feedback has illuminated an important aspect, and we believe this addition will offer more clarity to the readers.
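To give a numerical flavor of why the least smooth component dominates, consider the 1-D Matérn spectral density. The sketch below uses a simplified, illustrative parameterization $f_\nu(\omega)\propto\sigma^2\alpha^{2\nu}(\alpha^2+\omega^2)^{-(\nu+1/2)}$ with $\alpha$ an inverse length scale and the $\nu$-dependent constants dropped — an assumption for illustration, not the paper's exact normalization. At high frequencies, the mixture's spectral density is governed by its roughest component:

```python
import numpy as np

def matern_sd(omega, sigma2, alpha, nu):
    # 1-D Matern spectral density up to a nu-dependent constant
    # (simplified, illustrative parameterization).
    return sigma2 * alpha ** (2 * nu) / (alpha ** 2 + omega ** 2) ** (nu + 0.5)

w1, w2 = 0.3, 0.7
omega = 1e4  # a "high" frequency

# Mixture of a nu = 1/2 and a nu = 3/2 component (unit variance, unit alpha).
mix = w1 * matern_sd(omega, 1.0, 1.0, 0.5) + w2 * matern_sd(omega, 1.0, 1.0, 1.5)

# The smoother component's density decays faster (extra (alpha^2 + omega^2)
# factor), so the ratio to the nu = 1/2 density tends to the weight w1.
ratio = mix / matern_sd(omega, 1.0, 1.0, 0.5)
```

Since equivalence of Gaussian measures is driven by the tail behavior of the spectral densities (cf. the integral test), this high-frequency dominance is the mechanism behind Theorem 2's conclusion.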
**Question 1.2: about the conclusion in Section 7**
A: Thank you for highlighting this discrepancy. You are absolutely right that there seems to be a contrast between the theoretical implications of Theorem 2 and the conclusion in Section 7.
The fundamental insight from Theorem 2 is that, from a purely theoretical standpoint, the inclusion of additional Matérn components with varying smoothness levels does not bring about any change in the underlying structure or representation capabilities of the kernel. Hence, when our model assumptions hold, one shouldn't expect improvements in prediction accuracy merely by adding kernels of different smoothness.
However, real-world data and applications often exhibit nuances and complexities that might not align perfectly with our theoretical model assumptions. One potential reason for the discrepancy in performance between a single kernel and a mixture kernel in real data settings could be the increased parameter space in the mixture model, making parameter estimation more challenging. As a result, despite the potential for greater flexibility, the mixture kernel might not always result in improved prediction due to potential sub-optimal parameter estimations.
Another aspect to consider is the potential mismatch between our chosen model (GP with specific kernels) and the true underlying process of the real-world data. If the real-world function doesn't strictly adhere to a Gaussian Process or doesn't exactly reside in the presumed RKHS, discrepancies between theoretical predictions and empirical observations can emerge.
In summary, while Theorem 2 provides a strong theoretical foundation asserting the equivalence of a single least-smooth Matérn kernel and its mixture counterpart, real-world applications introduce complexities that can sometimes deviate from our theoretical insights. We have further elucidated this point in the revised manuscript to ensure that the relationship between theoretical findings and empirical observations is clear to our readers.
**Question 2: suggestion about additional experiments related to Figure 6**
A: We are grateful for the reviewer's insightful suggestion, emphasizing the potential to further elucidate our arguments. The proposed comparison between the additional components is not only intriguing but will also serve to provide more comprehensive insight into our research.
Acknowledging the feedback, and in alignment with remarks from reviewer j39y, we recognize that the Pollen dataset might not be the most optimal choice for demonstrating the nuances of the Matérn kernel. Taking this into account, we have transitioned to utilizing the Mauna Loa $CO_2$ time series data (Figure 3 in the 1-page PDF for rebuttal). We observe similar performance between Matérn mixture $1/2+3/2+5/2$ and Matérn $1/2$, as well as the resemblance between Matérn mixture $3/2+5/2$ and Matérn $3/2$. At the same time, while the performance of Matérn $3/2$ exceeds that of Matérn $1/2$, underscoring the dataset's inherent differentiability, it's noteworthy that the integration of the $3/2$ component to $1/2$ in a mixture kernel doesn't elevate its performance to match that of Matérn $3/2$. This observation supports our contention: the performance of a Matérn mixture kernel is chiefly dictated by its least smooth component.
---
Rebuttal Comment 1.1:
Comment: Many thanks for the response.
---
Reply to Comment 1.1.1:
Title: Further clarification and experiments for Reviewer dNVU's Suggestion
Comment:
We are grateful that you valued our work from the start. We'd like to update you on some recent enhancements to our manuscript.
In response to your suggestion, we have expanded our comparison to include results for other components, showcased in Figure 3 in the rebuttal 1-page PDF. Building on your insights regarding performance distinction, we were motivated to delve deeper. Taking cues from both Reviewer PY7Z—who highlighted the potential context of the Mauna Loa dataset, suggesting Matern 1/2 might not be the optimal choice—and Reviewer j39y—who raised intriguing questions about finite sample behavior—we embarked on further experiments. These were crafted to affirm our previous findings and introduce new perspectives.
To provide a concrete response, we applied the following six kernels to the Mauna Loa dataset, covering the years 1960 to 2020: 1/2+3/2+5/2, 1/2+3/2, 1/2, 3/2, 3/2+5/2, 5/2. By varying the training sample size from 10% to 90% and performing 10 replications for each size, we gauged performance across training sizes. The resultant RMSE (std) values for all kernels are detailed below:
| % | 1/2+3/2+5/2 | 1/2+3/2 | 1/2 | 3/2 | 3/2+5/2 | 5/2 |
|---|---|---|---|---|---|---|
| 10 | 2.19 (0.09) | 2.20 (0.09) | 2.55 (0.12) | 2.28 (0.07) | 2.26 (0.13) | 2.28 (0.07) |
| 20 | 1.96 (0.06) | 1.97 (0.06) | 2.38 (0.08) | 2.48 (0.53) | 2.04 (0.19) | 2.22 (0.04) |
| 30 | 1.74 (0.05) | 1.74 (0.05) | 2.03 (0.11) | 2.20 (0.28) | 1.57 (0.06) | 2.19 (0.03) |
| 40 | 1.49 (0.07) | 1.50 (0.07) | 1.61 (0.12) | 1.35 (0.18) | 1.90 (0.43) | 1.94 (0.38) |
| 50 | 1.25 (0.06) | 1.25 (0.06) | 1.28 (0.09) | 0.97 (0.19) | 1.41 (0.64) | 1.56 (0.64) |
| 60 | 1.01 (0.09) | 1.01 (0.09) | 0.98 (0.10) | 0.69 (0.13) | 0.96 (0.62) | 1.16 (0.70) |
| 70 | 0.85 (0.09) | 0.85 (0.10) | 0.82 (0.10) | 0.52 (0.12) | 0.52 (0.08) | 0.81 (0.72) |
| 80 | 0.70 (0.07) | 0.70 (0.07) | 0.68 (0.07) | 0.40 (0.03) | 0.40 (0.04) | 1.08 (0.94) |
| 90 | 0.57 (0.04) | 0.57 (0.04) | 0.55 (0.04) | 0.35 (0.03) | 0.35 (0.03) | 1.42 (0.93) |
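The sweep above can be sketched schematically as follows. This is an illustrative stand-in using scikit-learn and a synthetic series (the exact Mauna Loa preprocessing and our model settings are not reproduced here):

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

rng = np.random.default_rng(1)
t = np.linspace(0.0, 10.0, 300)[:, None]
# Synthetic stand-in for a trending time series.
y = np.sin(t[:, 0]) + 0.05 * t[:, 0] ** 2 + 0.1 * rng.standard_normal(300)

def rmse_for(kernel, frac, seed):
    # Random train/test split at the given training fraction, fit by ML-II,
    # and report test RMSE.
    idx = np.random.default_rng(seed).permutation(len(t))
    n_tr = int(frac * len(t))
    tr, te = idx[:n_tr], idx[n_tr:]
    gp = GaussianProcessRegressor(kernel=kernel, alpha=1e-2, random_state=0)
    gp.fit(t[tr], y[tr])
    return float(np.sqrt(np.mean((gp.predict(t[te]) - y[te]) ** 2)))

mixture = Matern(nu=0.5) + Matern(nu=1.5) + Matern(nu=2.5)  # "1/2+3/2+5/2"
single = Matern(nu=0.5)                                     # least smooth component
r_mix = rmse_for(mixture, 0.5, seed=0)
r_single = rmse_for(single, 0.5, seed=0)
```

Repeating `rmse_for` over fractions 0.1-0.9 with several seeds, and averaging, yields a table of the same shape as the one above.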
Key observations emerged:
The mixture kernel 1/2+3/2+5/2 aligned closely with 1/2+3/2 across all experiments. Further, these three (1/2+3/2+5/2, 1/2+3/2, 1/2) converge when the sample size ratio reaches 50%. This empirical finding supports the theoretical equivalence of these three GPs as posited in Theorem 2. Similarly, the 3/2+5/2 kernel and the standalone 3/2 kernel converge from a 70% sample size, lending empirical weight to their theoretical equivalence. We added a corollary claiming that the MSE of the mixture kernel asymptotically matches the MSE of its least smooth component.
A noteworthy trend is the exceptional performance of the Matérn 3/2 kernel across all experiments, echoing the reviewers' conjecture that the dataset admits a smooth structure. Its strong performance, however, gets diluted when coupled with the less smooth component, Matérn 1/2. Conversely, when mixed with the smoother Matérn 5/2 component, the weaker performance of the latter is overshadowed by Matérn 3/2. These behaviors lend empirical weight to our Theorem 2, suggesting that, in the asymptotic sense, the mixture kernel is dominated by its least smooth component.
In the realm of finite samples, we observed intriguing nuances. Theoretically, the combined kernels 1/2+3/2+5/2, 1/2+3/2, and the standalone 1/2 are deemed equivalent, and 3/2+5/2 is akin to 3/2, yet their practical agreement points (50% and 70%, respectively) differed with limited samples. Although securing conclusive theoretical support for the finite-sample regime is challenging, we can provide some insights to interpret our findings. To do so, we delve further into the proof of Stein's Theorem 1 in Statistics & Probability Letters (1991), which guides us to Theorem 3.1 in Stein (AoS 1990). The proof suggests that the relative difference between the MSEs rests on the tail of the series in the second-last line on Page 854. This difference is influenced by the sample size (denoted as 'N' in Stein 1990) and by $b_{jk}$ and $\mu_j$, as defined on the same page. For a fixed sample size, smaller values of $(b_{jk}+\mu_j\mu_k)^2$ result in a smaller relative difference between MSEs. By the definition of $b_{jk}$ and $\mu_j$, the more "different" the two kernels are — in other words, the more "significant" the additional smoother components become — the greater the difference between the MSEs will be.
In conclusion, we have delved deeply into the intricacies of all six kernels, as you've rightly pointed out. In the realm of infinite samples, we've provided robust theoretical backing. For the finite sample domain, despite the inherent challenges and ambiguities, our discourse offers both theoretical and empirical insights.
We are genuinely grateful for your meticulous comments. We anticipate that the newfound depth and clarity in our revised content align with the broader objectives of our paper, and we hope these enhancements can be viewed favorably in your evaluation. | Summary: This work looks at identifiability in the context of mixture kernels in GP regression. They look at additive mixtures of Matern kernels and multivariate separable kernels in the context of multivariate GPs (multiple outputs). They show theoretically and through simulation that ML-II learning cannot discover the hyperparameters in the mixture corresponding to the different components, as they are not microergodic (Stein, 1999). On the contrary, only products of hyperparameters in specific combinations are identifiable. The simulation studies on synthetic data are interesting - through repeated experiments they show that ML-II does not converge to the true hyperparameters in a mixture kernel and only the microergodic parameters are discovered.
Strengths: A lot of the theory on identifiability has been scattered in the literature, like Zhang (2004) and Stein's textbook (1999), but I would argue that a succinct and targeted presentation such as this one might benefit the community and hence can be significant.
Weaknesses: This work begs the question - what happens in the large data limit when n -> \infty, as the simulations and applications only address the finite data limit. My guess is that the parameters which are not microergodic remain unidentifiable but some comment is needed.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: - Theorem 2 'p' is introduced without defining. I've had to figure that it is the dimension.
- Perhaps tangential to this work but it would be interesting to reason about the identifiability of the noise variance when a mixture kernel is used.
- How does the additive GPs work where the constructed kernel is additive and considers all combinations of dimensions in the data fit with the identifiability insights of this work.
- About the dominance of the least smooth kernel component, how does one extend the insight to mixtures of different kernel types like Matern + RQ or periodic + Matern.
- Line 281 / 282 pls cite Lalchand et al, Generalised GPLVM with Stochastic Variational Inference, AISTATS 2022.
- Line 47/ 48 pls cite Simpson et al, Kernel Identification with Transformers, NeurIPS, 2021.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations:
The most obvious one - which the authors mention in conclusions is the extension of these insights to mixture of kernel families which are used very frequently to model different effects in univariate functions. eg. CO2 Mauna Loa kernel.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal:
Thank you for presenting us with such insightful questions and observations regarding our manuscript. Given the submission constraints, we're unable to provide an updated version of the full manuscript at this juncture. However, to facilitate a more tangible understanding of our clarifications and to provide visual support, we've attached a 1-page PDF with additional figures. In the subsequent sections, we provide a point-by-point response to each of your comments.
**Weakness about large sample**
A: We appreciate this observation. Our theoretical framework is indeed designed for the infinite sample scenario. Consequently, as $n \to \infty$, the conclusion should remain consistent, wherein only the proposed microergodic parameter is identifiable, while others are not. We have clarified this in the manuscript and conducted additional simulations for $n=1000,3000$. The conclusion remains consistent (Figure 1 in the one-page PDF).
**Question 1 about $p$**
A: Thank you for pointing this out. Indeed, we defined all domains in this manuscript as $\Omega = \mathbb{R}^p$ in line 94. Recognizing how far that definition sits from Theorem 2, we have explicitly defined $p$ as the dimension of the domain within Theorem 2 in our revised manuscript.
**Question 2 about noise variance**
A: Thank you for raising this intriguing point. In the majority of existing literature, researchers often work under the assumption of zero noise (no nugget), primarily due to the complexities associated with noise inclusion (nowhere continuous sample paths). To our current knowledge, the most recent investigations into this matter can be found in Tang, Zhang, and Banerjee (JRSSB 2022) and Loh and Sun (Bernoulli 2023). These studies focus on a single Matérn kernel rather than a mixture kernel. To summarize their findings briefly: the noise variance is indeed identifiable, and the presence of noise does not undermine the identifiability of the Matérn parameters. Extending these findings to our mixture kernel remains challenging but is a fruitful avenue for future research.
**Question 3 about adding up all dimensions**
A: This question delves deep into an intriguing facet of our research. We suppose you're referring to a kernel of the form $K(x,x') = \sum_{l=1}^p w_l \text{Matern}(x_l-x_l';\sigma^2_l,\alpha_l,\nu)$, where $x_l$ represents the $l$-th coordinate of $x$.
If so, regrettably, the primary tool we utilized for proving equivalence of measures isn't directly applicable here. This is primarily due to the non-fulfillment of the first condition in the integral test (Yadrenko 1983), which expects the spectral density to behave as $1/\omega^r$ for some positive $r$. Our conjecture, based on our current understanding, is as follows: individual parameters may not be identifiable, but the term $w_l\sigma_l^2\alpha_l^{2\nu}$ remains identifiable for each $l=1,...,p$.
To underpin this conjecture, we employed distinct Matérn $1/2$ kernels for every dimension of a synthesized 2-dimensional dataset ($p=2$). Our observations indicate that both $w_1\sigma_1^2\alpha_1^{2\nu}$ and $w_2\sigma_2^2\alpha_2^{2\nu}$ are identifiable (Figure 2 in the one-page PDF).
One can interpret these findings as follows: when the domain is restricted to a 1-D line in 2-D, for instance where $x_1\neq 0$ and all other dimensions are set to zero ($x_2=...=x_p=0$), the kernel reduces to a 1-D Matérn supported solely on the $x_1$-axis. Based on established theory, $w_1\sigma_1^2\alpha_1^{2\nu}$ becomes identifiable.
This logic can be analogously applied to other dimensions. However, it's imperative to note that this isn't a rigorous proof given the absence of a conclusive integral test. Your question has certainly shed light on a promising direction for future research. We're grateful for this thought-provoking inquiry.
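For concreteness, the additive per-dimension kernel discussed above can be assembled directly. The sketch below is illustrative, under an assumed parameterization in which $\alpha_l$ acts as an inverse length scale for the Matérn 1/2 (exponential) component on coordinate $l$; it simply builds the Gram matrix and lets one check symmetry and positive semi-definiteness:

```python
import numpy as np

def additive_matern12(X, Z, w, sigma2, alpha):
    # K(x, z) = sum_l w_l * sigma2_l * exp(-alpha_l * |x_l - z_l|):
    # a Matern 1/2 (exponential) kernel applied per coordinate, then summed.
    # alpha_l is treated as an inverse length scale (assumed parameterization).
    K = np.zeros((X.shape[0], Z.shape[0]))
    for l in range(X.shape[1]):
        d = np.abs(X[:, l][:, None] - Z[None, :, l])
        K += w[l] * sigma2[l] * np.exp(-alpha[l] * d)
    return K

rng = np.random.default_rng(0)
X = rng.uniform(size=(50, 2))  # synthetic 2-D inputs (p = 2)
K = additive_matern12(X, X, w=[0.4, 0.6], sigma2=[1.0, 2.0], alpha=[1.0, 3.0])
```

Each summand is a valid PSD kernel on the projection $x\mapsto x_l$, so the sum is PSD; only the products $w_l\sigma_l^2$ enter the Gram matrix per component, which already hints at the non-identifiability of $w_l$ and $\sigma_l^2$ individually.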
**Question 4 about RQ and periodic kernels**
A: Thank you for posing this insightful query. Notably, RQ and periodic kernels do not have analytic spectral densities, which are essential for a theoretical analysis of GPs. Extending our insights to such kernel mixtures would necessitate the application of more advanced mathematical techniques, particularly within probability theory. We recognize this as a promising area for future research.
**Questions 5 and 6 about references**
A: Thank you for drawing our attention to these relevant works. We have now incorporated citations to both Lalchand et al. (AISTATS 2022) and Simpson et al. (NeurIPS 2021) in the appropriate sections of our revised manuscript.
**Limitation about Co2 Mauna Loa kernel**
A: Thank you for highlighting this pertinent point. The CO2 Mauna Loa dataset and its associated kernel combination serve as iconic examples in the Gaussian Processes literature, demonstrating the power of combining different kernels to encapsulate diverse features of a dataset. While our study emphasizes the theoretical underpinnings of mixture kernel types, integrating these findings into practical, often-used kernel combinations like those for the CO2 Mauna Loa dataset would undeniably deepen our understanding. As we noted in our discussion, broadening our framework to encapsulate other kernel families is an intriguing and vital future research direction.
However, we'd like to underscore that extending these insights to some kernel families can be particularly challenging due to the absence of an analytic form of their spectral densities. Instead, we performed experiments on this dataset with Matérn kernels; see Figure 3 in the one-page PDF for rebuttal. On this new dataset, Matérn 1/2 performs similarly to the mixture of Matérn 1/2, 3/2, and 5/2, and Matérn 3/2 performs similarly to the mixture of Matérn 3/2 and 5/2, in terms of MSE, further supporting our Theorem 2.
---
Rebuttal Comment 1.1:
Title: Post rebuttal update
Comment: Thank you for putting in the time and work towards the response and tackling most of the comments and questions raised. I would hope that you would add some of this discussion into the manuscript if accepted, esp. the large data limit and identifiability of noise variance. Overall, I think understanding identifiability and interpretability is so crucial in using GPs but rarely discussed.
I am happy to support this paper by raising my score to a 7 but keeping my confidence intact at 3.
---
Reply to Comment 1.1.1:
Title: Some updates
Comment: Thank you for recognizing the efforts we've put into addressing your comments and concerns. We genuinely value your constructive feedback, which has undeniably improved the caliber of our paper. We're excited to share further insights and updates related to the discussion section.
**Regarding the Large Data Limit**
Your acknowledgment of the Mauna Loa kernel's relevance is deeply appreciated. While we primarily engage with the Matern mixture kernel, the Mauna Loa dataset's significance within our theoretical framework became evident. Our preliminary investigations are captured in Figure 3 of the rebuttal's 1-page PDF. Informed by the insights from both Reviewer dNVU and Reviewer j39y, we embarked on extended experiments. These not only reinforced our earlier findings but also presented novel viewpoints.
To provide a concrete response, we applied the following six kernels to the Mauna Loa dataset, covering the years from 1960 to 2020: 1/2+3/2+5/2, 1/2+3/2, 1/2, 3/2, 3/2+5/2, 5/2. By varying the training sample size from 10% to 90%, and performing 10 replications for each size, we gauged performance across different training sizes. The resultant RMSE (std) values for three representative kernels are detailed below:
| Training % | 1/2+3/2 | 1/2 | 3/2 |
|---|---|---|---|
| 10 | 2.20 (0.09) | 2.55 (0.12) | 2.28 (0.07) |
| 20 | 1.97 (0.06) | 2.38 (0.08) | 2.48 (0.53) |
| 30 | 1.74 (0.05) | 2.03 (0.11) | 2.20 (0.28) |
| 40 | 1.50 (0.07) | 1.61 (0.12) | 1.35 (0.18) |
| 50 | 1.25 (0.06) | 1.28 (0.09) | 0.97 (0.19) |
| 60 | 1.01 (0.09) | 0.98 (0.10) | 0.69 (0.13) |
| 70 | 0.85 (0.10) | 0.82 (0.10) | 0.52 (0.12) |
| 80 | 0.70 (0.07) | 0.68 (0.07) | 0.40 (0.03) |
| 90 | 0.57 (0.04) | 0.55 (0.04) | 0.35 (0.03) |
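As an illustration of this protocol, the following numpy sketch fits a zero-mean GP with a mixture of Matérn kernels at varying training fractions and compares RMSEs. This is a hypothetical sketch on a synthetic stand-in for the CO2 series; the lengthscales, weights, noise level, and data below are illustrative placeholders, not the settings used in our experiments.

```python
import numpy as np

def matern(d, ell, nu):
    """Matérn kernel on distances d; closed forms for nu in {0.5, 1.5, 2.5}."""
    r = np.abs(d) / ell
    if nu == 0.5:
        return np.exp(-r)
    if nu == 1.5:
        return (1 + np.sqrt(3) * r) * np.exp(-np.sqrt(3) * r)
    if nu == 2.5:
        return (1 + np.sqrt(5) * r + 5 * r**2 / 3) * np.exp(-np.sqrt(5) * r)
    raise ValueError(nu)

def mixture_kernel(x1, x2, nus, weights, ell=1.0):
    """Weighted sum of Matérn kernels, K = sum_i w_i * Matern_{nu_i}."""
    d = x1[:, None] - x2[None, :]
    return sum(w * matern(d, ell, nu) for w, nu in zip(weights, nus))

def gp_rmse(x, y, train_frac, nus, weights, noise=1e-2, rng=None):
    """Random train/test split, exact GP regression, test RMSE."""
    rng = rng or np.random.default_rng(0)
    n = len(x)
    idx = rng.permutation(n)
    n_tr = int(train_frac * n)
    tr, te = idx[:n_tr], idx[n_tr:]
    K = mixture_kernel(x[tr], x[tr], nus, weights) + noise * np.eye(n_tr)
    Ks = mixture_kernel(x[te], x[tr], nus, weights)
    mean = Ks @ np.linalg.solve(K, y[tr])
    return float(np.sqrt(np.mean((mean - y[te]) ** 2)))

# Synthetic stand-in for the CO2 series: linear trend + seasonal cycle + noise.
rng = np.random.default_rng(0)
x = np.linspace(0.0, 6.0, 300)
y = 0.5 * x + np.sin(2 * np.pi * x) + 0.05 * rng.standard_normal(300)

for frac in (0.1, 0.5, 0.9):
    e_mix = gp_rmse(x, y, frac, nus=(0.5, 1.5), weights=(0.5, 0.5))
    e_12 = gp_rmse(x, y, frac, nus=(0.5,), weights=(1.0,))
    print(frac, round(e_mix, 3), round(e_12, 3))
```

The pattern of interest is whether the mixture (1/2+3/2) and its least smooth component (1/2) give comparable errors as the training fraction grows.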
Several key observations emerged. Here we highlight the results for (1/2+3/2, 1/2, 3/2); detailed results for all six kernels can be found in our comment to Reviewer dNVU, "Further clarification and experiments for Reviewer dNVU's Suggestion". The mixture kernel 1/2+3/2 and Matérn 1/2 converge once the training size ratio reaches 50%. This empirical finding supports the theoretical equivalence of these GPs posited in Theorem 2. We have added a corollary claiming that the MSE of the mixture kernel asymptotically matches the MSE of its least smooth component.
In the finite-sample regime, we observed intriguing nuances. Theoretically, while the mixture kernel 1/2+3/2 and the standalone 1/2 are deemed equivalent, their performance can differ with limited samples. Although securing conclusive theoretical support for the finite-sample regime is challenging, we can offer some insight into our findings. To do so, we delve further into the proof of Theorem 1 in Stein (Statistics & Probability Letters, 1991), which leads to Theorem 3.1 in Stein (Annals of Statistics, 1990). The proof suggests that the relative difference between the MSEs rests on the tail of the series in the second-to-last line on page 854. This difference is influenced by both the sample size (denoted $N$ in Stein 1990) and by $b_{jk}$ and $\mu_j$, as defined on the same page. For a fixed sample size, smaller values of $(b_{jk}+\mu_j\mu_k)^2$ result in a smaller relative difference between the MSEs. By the definitions of $b_{jk}$ and $\mu_j$, the more "different" the two kernels are (in other words, the more "significant" the additional smoother components become), the greater the difference between the MSEs will be.
In essence, we have deepened our exploration into the differences between finite and infinite sample scenarios. While our foundational theories rest upon infinite samples, we've endeavored to give both theoretical and practical insights into the finite sample scenario.
**On the Identifiability of Noise Variance**
After rigorous deliberation, we have deduced the analogue of Theorem 2 for noise (nuggets). Let $\tau^2$ and $\widetilde{\tau}^2$ be the noise variances of $K$ and $\widetilde{K}$. If $\tau^2\neq\widetilde{\tau}^2$, then $K\not\equiv \widetilde{K}$; if $\tau^2=\widetilde{\tau}^2$, the previous results in Theorem 2 hold. That is, the noise variance is identifiable, and the presence of noise does not affect our claims for the noiseless case. Due to the page limit, we briefly sketch the proof in this response. First, by our Theorem 1, both $K$ and $\widetilde{K}$ are mean-square continuous. Applying Lemma 1 of Tang, Zhang, and Banerjee (JRSSB 2022), we conclude that if $\tau^2\neq\widetilde{\tau}^2$, then $K\not\equiv \widetilde{K}$. Similarly, if $\tau^2=\widetilde{\tau}^2$, Theorem 2 for the noiseless case follows.
We are keen on weaving these enriched insights into our manuscript's discussion section. Your elevated score and unwavering support significantly encourage our ongoing commitment to this research. | Rebuttal 1:
Rebuttal: We express our sincere gratitude to the reviewers for their meticulous examination of our manuscript and their insightful comments. Your observations and recommendations have significantly enhanced the quality and content of our paper, and we have learned much from them. In this global response, we discuss common interests and concerns across the comments and go through the new changes in the manuscript. At the end, we explain the details of the figures in the one-page PDF.
**1. (Reviewer PY7Z, LXB2, j39y)** We wish to address the concern regarding the large data limit. Our theory is asymptotic in nature, and as such, our results remain consistent as $n\to\infty$.
**2. (Reviewer PY7Z, dNVU, LXB2)** We noted the shared interest in extending our methods to other kernels, such as RQ and periodic kernels. These particular kernels lack the analytic spectral densities necessary for a theoretical analysis of Gaussian Processes, so extending to them would necessitate more complex mathematical approaches. While our current study does not encompass these kernels, we have initiated a new simulation in response to reviewer PY7Z's interest in the kernel $K(x,x')=\sum_{l=1}^pw_l\text{Matern}(x_l-x_l';\sigma^2_l,\alpha_l,\nu)$, where $x_l$ represents the $l$-th coordinate of $x$. Although the primary tool we used to prove equivalence of measures is not directly applicable in this context, we have formulated a conjecture that the microergodic parameter of each dimension remains identifiable while the other individual parameters may not be, along with a simulation study to support this conjecture.
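To make this additive kernel concrete, here is a minimal numpy sketch (a hypothetical illustration, not our experimental code) that builds $K(x,x')=\sum_l w_l\,\text{Matern}(x_l-x_l';\sigma^2_l,\alpha_l,\nu)$ for $\nu=1/2$ and computes the per-dimension microergodic parameter $w_l\sigma^2_l\alpha_l^{2\nu}$ that the conjecture identifies. The parameter values below are made up; they are chosen so that the three dimensions have different individual $(w_l,\sigma^2_l,\alpha_l)$ yet the same microergodic product.

```python
import numpy as np

def matern_half(d, sigma2, alpha):
    """Matérn with nu = 1/2 (exponential kernel): sigma^2 * exp(-alpha * |d|)."""
    return sigma2 * np.exp(-alpha * np.abs(d))

def additive_matern(X1, X2, w, sigma2, alpha):
    """K(x, x') = sum_l w_l * Matern(x_l - x_l'; sigma2_l, alpha_l, nu = 1/2)."""
    K = np.zeros((X1.shape[0], X2.shape[0]))
    for l in range(X1.shape[1]):
        d = X1[:, l][:, None] - X2[:, l][None, :]
        K += w[l] * matern_half(d, sigma2[l], alpha[l])
    return K

rng = np.random.default_rng(0)
X = rng.standard_normal((50, 3))  # 50 points in p = 3 dimensions

# Different (w, sigma^2, alpha) per dimension, same microergodic product.
w, sigma2, alpha = [1.0, 0.5, 2.0], [1.0, 1.0, 0.5], [1.0, 2.0, 1.0]
K = additive_matern(X, X, w, sigma2, alpha)

# Draw one GP sample to check the covariance is usable (small jitter added).
sample = rng.multivariate_normal(np.zeros(50), K + 1e-8 * np.eye(50))

# Conjectured identifiable quantity per dimension, nu = 1/2:
# w_l * sigma2_l * alpha_l^(2 * nu) = w_l * sigma2_l * alpha_l.
micro = [w[l] * sigma2[l] * alpha[l] for l in range(3)]
print(micro)  # identical across dimensions by construction
```

Under the conjecture, two parameterizations like these would be statistically indistinguishable from fill-in observations even though their individual parameters differ.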
**3. (Reviewer dNVU, LXB2, j39y)** The importance of smoothness in our paper was another focal point. We would like to reiterate that the smoothness of a mixture kernel is influenced by the least smooth kernel within it. This property has direct implications for the conditions in our proof, and we've conducted new experiments to numerically verify Theorem 1.
**4. (Reviewer LXB2, j39y)** We agree that the limited performance of the Matérn kernel may affect the presentation and raise concerns. In acknowledgment of this issue, we have conducted two new experiments: MNIST image recovery and prediction on Mauna Loa CO$_2$. For the MNIST dataset, Matérn kernels exhibit satisfactory performance in recovering the image. Mauna Loa CO$_2$ is a well-known example that has been widely used with Matérn kernels and other mixture kernels. Interestingly, Matérn 3/2 performed better than Matérn 1/2 in our experiments. However, in the Matérn mixture of $\nu=1/2,3/2,5/2$, adding Matérn 3/2 did not improve performance, and the mixture kernel's performance remained similar to Matérn 1/2. This finding further confirms our point. We appreciate your feedback, which has assisted in refining our study.
In response to the feedback received, we have made substantial changes to our manuscript to address the concerns raised. These revisions include the addition and clarification of terminology uncommon in the machine learning community. We have polished all the figures to enhance their clarity and presentation. While we are unable to submit a revised manuscript due to submission rules, the style of the revised figures can be seen in the new figures included in the one-page PDF. Furthermore, we have expanded our literature review to include detailed information about previous work on the application of mixture kernels, such as the specific types of kernels used in various analyses, and the identifiability work concerning Matérn kernels. This added context should strengthen the connection between our work and existing research. We also included six new experiments.
Our one-page PDF includes the visual presentation of our newly conducted experiments. We conducted four new simulations and two real data applications.
**Figure 1 (Reviewer PY7Z)** Simulation 2 with larger sample sizes, where we added experiments with sample sizes of $n=1000, 3000$ to validate our conclusions in the much larger sample regime, thus reinforcing our results.
**Figure 2 (Reviewer PY7Z)** New simulation concerning the kernel $K(x,x')=\sum_{l=1}^p w_l\text{Matern}(x_l-x_l';\sigma^2_l,\alpha_l,\nu)$, leading us to conjecture that microergodic parameters for each dimension are identifiable while all other parameters remain unidentifiable.
**Figure 3,4 (Reviewer PY7Z, dNVU, LXB2, j39y)** Mauna Loa CO$_2$ data in 2022 (smoother) and MNIST data (non-periodic), as suggested by reviewers. Our experiments support our claim that the performance of a mixture kernel is determined by its least smooth component rather than its best-performing component.
**Figure 6 (Reviewer dNVU, LXB2, j39y)** New simulation to support Theorem 1, allowing us to confirm that Matérn 3/2 is continuous and differentiable, while Matérn 1/2 and the Matérn mixture 1/2+3/2 are continuous but not differentiable.
**Figure 5,7 (Reviewer LXB2, j39y)** Simulation 2 with the L-BFGS optimizer, responding to reviewer concerns by verifying our theorem across various optimizers and learning rates, thus reinforcing our confidence in the results. Additionally, following the reviewers' suggestions, two new panels about $w_2\sigma_2^2\alpha_2^{2\nu_2},w_3\sigma_3^2\alpha_3^{2\nu_3}$ were added to the figures of all simulations to better substantiate our point that only the microergodic parameter for the least smooth component is identifiable.
We have undertaken extensive work in this rebuttal and devoted significant effort to refining our manuscript. We hope that the newly added explanation and experiments could answer your questions and concerns. We would be more than happy to discuss more in the discussion period. Once again, we thank the reviewers for their thoughtful examination and invaluable feedback. Your contributions have not only enhanced our current work but also promise to influence and inspire further research in this field.
Pdf: /pdf/7cd08a9b5dcdfbb92bf26fabcb458fa30ae9d96c.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Adversarial Training from Mean Field Perspective | Accept (spotlight) | Summary: The authors proposed a new theoretical framework based on the mean field theory to analyse adversarial training from several perspectives, such as the upper bounds of adversarial loss, the time evolution of weight variance and adversarially trainable conditions. Besides the theoretical analysis, the authors conducted several experiments verifying the proposed theoretical framework.
Generally speaking, the proposed theoretical framework provides a new perspective to analyse adversarial training and is highly versatile and even can extend to other training methods.
Strengths: The paper is organised well and easy to follow.
The proposed theoretical framework seems inspiring and intriguing and gives a new perspective to analyse adversarial training from several aspects, which may serve as a good guidance for future work. Besides the proposed theory, verification experiments were also conducted to prove its effectiveness further.
Weaknesses: 1. The verification experiments were only conducted on the easy dataset (MNIST); it may strengthen the findings if additional experiments are conducted on more challenging datasets.
2. Including a more systematic evaluation of the results may be beneficial, such as adversarial loss in residual networks vs. training steps for normally or adversarially training.
3. Several conclusions match previous works; it would be more convincing if references were given in the main content. For example, "the square sum of the weights in Ineq. (9) suggests that adversarial training exhibits a weight regularisation effect" is consistent with [1], and "to achieve high capacity in adversarial training, it is necessary to increase not only the number of layers L but also the width N to keep L^2/N constant" is somewhat consistent with [2].
4. The authors mentioned in Line 260 that "This result suggests that residual networks are better suited for adversarial training. However, one of the previous studies indicated that residual networks are more vulnerable to transfer attacks [54] than vanilla networks." However, the proposed theoretical framework does not explain such a transfer-attack phenomenon. Could the authors explain or give more comments on this?
5. In Equations 3 and 4, the minimization part is not shown; perhaps it would be more reasonable to use the min-max form in Equations 3 and 4.
[1] A unified gradient regularization family for adversarial examples, in: IEEE International Conference on Data Mining (ICDM), 2015.
[2] Do wider neural networks really help adversarial robustness? Advances in Neural Information Processing Systems, 34.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: See Weakness above.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 4 excellent
Limitations: The authors have clearly discussed the limitations of the proposed framework, such as "some theorems begin to diverge from the actual behaviour" and "the mean field theory assumes infinite network width, which is practically infeasible".
From my point of view, this article does not involve any potential negative social impact.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We would like to thank your careful reading.
> The verification experiments were only conducted on the easy dataset (MNIST); it may strengthen the findings if additional experiments are conducted on more challenging datasets.
We have only tested our theory on simple datasets. This is because fully connected networks struggle to achieve high training accuracy and generalization performance on more complex datasets, such as CIFAR-10 and ImageNet, even in standard training. This complicates a fair comparison between standard and adversarial training. To perform both theoretical and experimental analysis of adversarial training on complex datasets, our theoretical results would need to be extended to more practical architectures, such as convolutional networks. Note that Table A8 shows results for Fashion-MNIST that are consistent with our theoretical findings. We also highlight that using MNIST for verification is a common practice in several theoretical approaches to adversarial defense, such as certified adversarial defense [3,4].
[3] Certified defenses against adversarial examples, ICLR18.
[4] Semidefinite relaxations for certifying robustness to adversarial examples, NeurIPS18.
> Including a more systematic evaluation of the results may be beneficial, such as adversarial loss in residual networks vs. training steps for normally or adversarially training.
Our extensive experiments cover each of our theoretical claims for vanilla and residual networks. Some results were sent to Appendix due to the page limitation. However, thanks to the reviewer’s comments, we found that some results for residual networks were not included, such as how the adversarial loss in residual networks changes during adversarial training. We will include these results in the updated version, along with more visually clear illustrations and comparisons.
> Several conclusions match the previous works; it could be more convincing if the reference could be given in the main content, such as "the square sum of the weights in Ineq. (9) suggests that adversarial training exhibits a weight regularisation effect," is consistent with [1], "to achieve high capacity in adversarial training, it is necessary to increase not only the number of layers $L$ but also the width $N$ to keep $L^2/N$ constant" is somehow consistent with [2].
We appreciate the reviewer’s introduction to related work and will include comparisons with these studies in the main text.
[1] suggested that adversarial training with the fast gradient sign method regularizes network weights around training instances, which aligns with Theorems 5.3 and F.9. However, their research relies on a strong assumption about the network's Jacobian, which does not strictly hold for realistic data distributions. Our study does not require such an assumption. Furthermore, we have provided explicit details regarding the time evolution of weight variance and the impacts of network width, depth, and other structures that were not provided in [1].
From a perturbation instability perspective, [2] demonstrated that increasing network width does not necessarily improve robustness. This may seem to contradict our result, which suggests that a wider network can help the model maintain capacity during adversarial training, implying greater robustness in wider networks. However, these two claims are compatible. Robustness is determined by both perturbation instability (negative effect) and network capacity (positive effect). While the negative effect of width appears dominant in [2]'s experiments on CIFAR-10 and WideResNet, the positive effect appeared more prevalent in our experiments on MNIST, Fashion-MNIST, and fully connected networks with or without shortcuts. The dominant factor may depend on the dataset and model architectures.
> The authors mentioned in Line 260 that "This result suggests that residual networks are better suited for adversarial training. However, one of the previous studies indicated that residual networks are more vulnerable to transfer attacks [54] than vanilla networks." However, the proposed theoretical framework did not explain such a transfer attack phenomenon. Could the authors explain or give more comments about this?
In practice, adversarial training of residual networks is more common than that of vanilla networks. We mentioned transfer attacks as a motivation for analyzing adversarial training of vanilla networks. However, as several reviewers have pointed out, our study does not address transfer attacks. To avoid confusion, we have decided to remove this section in the updated version.
> in Equation 3 and 4, it seems that the minimize part is not shown, perhaps it would be more reasonable to change the form of min-max in equation 3 and 4.
Equations 3 and 4 do not include minimization because they define adversarial loss. The minimization of this loss with the standard loss defines adversarial training, which is described below Equations 3 and 4 (perhaps we should highlight this point more clearly). Many of our results and proofs refer to the adversarial loss, and we would like to maintain its current form without minimization.
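To illustrate why the adversarial loss is stated as a pure maximization (the outer minimization over weights belongs to the training objective, not the loss definition), here is a toy numpy sketch in which the inner maximum over the infinity-ball admits a closed form. The single-output linear model, dimensions, and $\epsilon$ below are hypothetical; they are not the ReLU networks analyzed in the paper.

```python
import numpy as np

# Toy single-output linear "network" f(x) = w @ x, purely illustrative.
rng = np.random.default_rng(0)
w = rng.standard_normal(10)
x = rng.standard_normal(10)
eps = 0.1

# For linear f, the maximizer of |f(x + eta) - f(x)| over ||eta||_inf <= eps
# is eta* = eps * sign(w), and the attained value is eps * ||w||_1.
eta_star = eps * np.sign(w)
attained = abs(w @ (x + eta_star) - w @ x)
print(attained, eps * np.abs(w).sum())  # the two agree

# Any other admissible perturbation does no better.
eta_rand = eps * rng.uniform(-1, 1, size=10)
assert abs(w @ (x + eta_rand) - w @ x) <= attained + 1e-12
```

This mirrors the structure of the adversarial loss in the paper, where the perturbation couples to the network through terms of the form $J(x)\eta$; minimizing this loss over weights is then a separate, subsequent step.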
---
Rebuttal Comment 1.1:
Title: Keep my previous rating
Comment: I appreciate the responses given by the authors which resolve some of my concerns. I would like to keep my previous positive rating. | Summary: The authors propose a mean field based framework for theoretically analyzing the training dynamics of adversarial training for MLP and residual networks with ReLU non-linearities. Based on this framework, the authors provide tight bounds on the adversarial loss (squared loss between clean and adversarial example output) and investigate trainability of MLP and residual networks.
Strengths: - While I am not very familiar with related work on mean field theory for (adversarially) training networks, the proposed framework seems to address limitations of prior work quite elegantly.
- The paper includes quite a number of interesting results using the proposed framework (loss bounds, training dynamics, trainability, impact of width).
- Formulation includes MLP and residual network with a class of “ReLU-like” non-linearities; is also includes adversarial training with clean loss (as e.g. done by TRADES).
- I feel the framework can be quite insightful and helpful for future work in understanding or improving adversarial training.
- Many claims are empirically supported on MNIST.
Weaknesses: My main concern about the paper is that it is incredibly dense (due to the number of included claims) and its structure does not make checking the proofs and claims easy. Even though I invested significantly more time in this review compared to other reviews, I was unable to fully follow all derivations and proofs. I believe this is mainly due to the extremely convoluted way of presenting the theoretical results across main paper and appendix. Unfortunately, I feel that this also makes me less enthusiastic about the results. Here are some more detailed comments and suggestions:
- Starting with 4.2, the reader is somewhat forced to jump to the appendix at least once just to see that the authors simply reformulate the ReLU using its derivative and D for J and a. This trend continues throughout the paper and appendix. Especially in the appendix, one is forced to jump around a lot just to follow 2-3 sentences.
- The proofs are structured starting with simple lemmata and building up to the actual theorems. This is generally fine, but again, the referencing is overdone. For Thm 4.1 in Appendix E, there are 9 pages of derivation with >20 individual lemmata. So if I want to follow the proof of Thm 4.1, I am forced to go through many but not all of these lemmata. Many have proofs of only 1-2 lines, but reference 2+ other lemmata or remarks and I need to remember the numbers or jump back and forth 2+ times just to read a single sentence. I feel the root cause for this is that many of the results are over-generalized and compartmentalized too much. I appreciate the thorough job of the authors in establishing many of the independence results, but as a reviewer and reader my #1 interest is following Thm 4.1, nothing more and nothing less. Everything that complicates this job is – in my opinion – bad for the paper. For me, the ideal solution would be a separate, easier to follow section for the proof of Thm 4.1 - even if it restates many of the lemmata and remarks - and moving the other results to a separate section for the (very very) interested reader.
- The main paper includes so many results that there is basically no discussion of each individual result. Often it feels that every sentence refers to some additional result in the appendix. I think for me, and many readers, actually discussing the results informally, in words, and taking more time and space to introduce the required notation would be more beneficial than including the current amount of results. I would prefer to have fewer results well-described and the remaining ones being in the appendix.
- Empirical results are discussed twice – after the corresponding theorems as well as in Section 6 – the space of the latter could be used to address one of the points above.
Comments and questions unrelated to structure and writing:
- In the introduction, contribution (d) is unclear to me – what theoretical result does it refer to, the trainability?
- The use of “probabilistic properties” is a bit unclear until the discussion of training dynamics. It would be helpful if the meaning would be detailed earlier in the paper.
- In 3.1, why is having $P^{in}$ and $P^{out}$ important, i.e., why do we need these fixed layers?
- Usually, the adversarial loss is also cross-entropy or something similar. While I saw papers arguing for a squared loss, TRADES does not use it AFAIK. Instead, the common setting is adversarial cross-entropy loss only, combined with clean cross-entropy loss or clean cross-entropy + KL as in TRADES. This makes me ask how assuming an adversarial cross-entropy loss would impact the results? Can similar results be derived?
- In 4, the assumption of independence is also unclear. I feel making this more explicit, e.g., by informally providing a short result on independence from the appendix, could be useful.
- The authors also highlight broader applicability; I am wondering if the authors derived similar results for standard training as reference? Or has this been done in previous work with other frameworks? This also related to the statement in l280.
- In 5.3 l261, I can’t follow how transfer attacks are relevant here? Transfer attacks should be weaker than the general attack modeled in the paper …
Conclusion:
I think that the paper has many interesting contributions and will be valued by the NeurIPS community. However, as I was not able to follow all derivations in a reasonable time, I will closely follow what the other reviewers have to say about the theoretical results. Also, I believe that the paper in its current form would have better fitted a long-format journal. For NeurIPS, I hope the authors invest some time in simplifying the main paper and restructuring the appendix. I think the current format will limit the audience of the paper to those very familiar with related work or willing to invest hours jumping back and forth between main paper and appendix. Some restructuring could really make this paper more accessible to the broad audience at NeurIPS.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: See weaknesses.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: See weaknesses.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ## Structure and writing
We extend our deepest appreciation to you for the considerable time and effort you have devoted to reviewing our paper. Taking the reviewer’s comments into account, we plan to improve the manuscript as follows.
**Simplified version of Theorem 4.1.**
We plan to present a simplified claim and its proof of Theorem 4.1 for one-hidden-layer vanilla networks. If readers can understand this, we believe that they can more easily follow the claim for multi-layer networks (i.e., Theorem 4.1). This is because its proof relies on mathematical induction with respect to the layers.
In addition, one factor that complicates the proof arises from a well-known but sometimes overlooked fact: uncorrelated Gaussian variables are not necessarily independent. We have prepared many lemmas to discuss this rigorously in our context. Considering the reviewer’s feedback, as a stepping stone, we decided to prove Theorem 4.1 with an incorrect claim (i.e., uncorrelated Gaussian variables are independent) as well as the formal proof.
Note that Proposition A.1 was provided to highlight the essence of Theorem 4.1, i.e., the independence of the distributions of $J$ and $a$ from the input $x$. However, we acknowledge that this has not been sufficiently emphasized.
**Separation of proofs for vanilla and residual networks.**
Currently, some of our lemmas apply only to residual networks, which may not be relevant to readers solely interested in the results for vanilla networks. We intend to separate the formal proofs for vanilla and residual networks.
**Generality and over-compartmentalization of lemmas.**
We plan to reconsider the generality of all claims to improve readability. However, maintaining a rigorous discussion while doing so is not easy. Therefore, we will initially focus on the previously mentioned step-by-step presentations. We acknowledge the over-compartmentalization and try to group lemmas with similar claims (e.g., Lemma E20--23).
**Simplification of main text.**
Although we have struggled with this, we believe that all claims in the main text are essential. We are not considering removing an entire theorem. Instead, we intend to increase intuitive explanations and informal presentations to improve readability. Following the reviewer’s suggestion, we also plan to remove some unnecessary references to experimental results to create space.
## Contents
> In the introduction, contribution (d) is unclear to me – what theoretical result does it refer to, trainability?
The content in "Other contributions" can be found in Section J. Contribution (d) corresponds to the paragraph "Mitigation of adversarial risk" in Section J. We will explicitly mention this in the updated version.
> The use of "probabilistic properties" is a bit unclear until the discussion of training dynamics.
In the updated version, we will improve the clarity of this part as follows (e.g., l30):
Before: existing mean field-based approaches cannot manage the probabilistic properties of an entire network ...
After: existing mean field-based approaches cannot manage the probabilistic properties of an entire network (e.g., the distribution of a network Jacobian) ...
> Why do we need $P^{in}$ and $P^{out}$?
They are introduced to discuss networks with general input and output sizes. They also slightly simplify the calculations, as they allow us to focus on trainable layers of fixed size $N\times N$. For example, without these layers, $\chi^{(0)}/\chi^{(L)}=\omega^L$ would become $\chi^{(0)}/\chi^{(L)}=N\omega^L/d$. Note that they do not harm training.
> Can similar results be derived for adversarial cross-entropy loss?
Based on the reviewer’s suggestion, we have considered the case with cross-entropy loss, and found that similar results can be obtained. We appreciate the comment and share some of the results. Suppose that the class label of $x$ is one and that the network has no biases. The adversarial loss is defined as:
$$\mathcal{L}(x):=\max\_{\|\eta\|\_\infty\leq\epsilon}\left(-\ln\frac{\exp(J(x+\eta)\_{1\cdot}(x+\eta))}{\sum^K\_{k=1}\exp(J(x+\eta)\_{k\cdot}(x+\eta))}\right).$$
For simplicity, we assume that (I) $x=0$. (II) $\epsilon$ is sufficiently small, and thus $J(x+\eta)=J(x)$ (cf. l931). (III) the input dimension is sufficiently large. Then,
$$\mathcal{L}(0)\leq\ln\left(\sum^K\_{k=1}\exp\left(\max\_{\|\eta\|\_\infty\leq\epsilon}J(0)\_{k\cdot}\eta\right)\right)+\max\_{\|\eta\|\_\infty\leq\epsilon}J(0)\_{1\cdot}\eta.$$
For any $k\in\\{1,\ldots,K\\}$ (cf. (A105)),
$$\max\_{\|\eta\|\_\infty\leq\epsilon}J(0)\_{k\cdot}\eta=\mathcal{O}(\omega^{L/2}).$$
Thus,
$$\mathcal{L}(0)\leq\mathcal{O}(\omega^{L/2}).$$
This is consistent with Theorem 5.1, which also gives $\mathcal{O}(\omega^{L/2})$. Thus, we consider that a similar discussion can be applied to Theorem 5.3, as well as those based on it (i.e., Theorems 5.6-5.8).
> In 4, the assumption of independence is unclear.
In this paper, "independence" was used in the probabilistic sense: two random variables are called independent if their joint probability equals the product of their probabilities.
> The authors also highlight broader applicability; have the authors derived similar results for standard training? Or has this been done in previous work with other frameworks?
We did not apply the proposed framework to standard training because this has been done in previous studies using other mean field-based approaches. However, previous approaches are not applicable to adversarial training, which is why we developed a new framework.
By "broader applicability" we mean that the utility of Theorem 4.1 is not limited to adversarial training. Theorem 4.1 provides a simple view of a ReLU-like network that should be useful for analyzing other training methods (e.g., contrastive learning).
> In l261, how are transfer attacks relevant?
A similar question was raised by Reviewer QtQE. Please kindly refer to our reply to Reviewer QtQE (due to the character limit).
---
Rebuttal Comment 1.1:
Title: Thanks for the response
Comment: I appreciate the authors' clarifications. I have no follow-up questions and, given the other positive reviews, my rating remains positive as well. | Summary: The paper provides a mean field analysis of ReLU networks for adversarial training. The main insight is that networks without residual connections most likely suffer from gradient explosion or vanishing and thus are not adversarially trainable, unlike in vanilla network training.
Strengths: Analyzing adversarial training performance is an important problem, both theoretically and practically.
The insight from the analysis, that adversarial training of vanilla ReLU networks is more likely to suffer from gradient explosion/vanishing, is interesting.
Weaknesses: The verification of the theorems might need more effort. E.g., Figure 4 is showing accuracy of vanilla network, it would be helpful to also show the curves for residual networks.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Could the authors elaborate a bit on why residual connections could make a bigger difference in adversarial training than vanilla training?
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: Limitations are well discussed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate your fruitful suggestion and question.
> The verification of the theorems might need more effort. E.g., Figure 4 is showing accuracy of vanilla network, it would be helpful to also show the curves for residual networks.
The training accuracy of adversarial training in residual networks is presented in Figure A19. Due to the complexity of the heatmap design, we believe that combining them into a single image would be challenging. In the updated version, we plan to clarify in the caption of Figure 4 that the results for residual networks can be found in Figure A19.
> Could the authors elaborate a bit on why residual connections could make a bigger difference in adversarial training than vanilla training?
This is because adversarial training has a strong weight regularization effect (cf. Theorem 5.3), while vanilla training (standard training) does not. As adversarial training progresses, the network weights become smaller. Therefore, without residual connections, the small weights lead to gradient vanishing as the input signals diminish during the forward pass. Note that network weights also decay during standard training with L2 regularization. However, adversarial training causes the weights to decay at a significantly faster rate than L2 regularization (cf. Section H in Appendix). | Summary: The theoretical understanding of adversarial training is an important and valuable topic. This work proposes a new theoretical framework for this based on mean field theory. With the proposed framework, the authors analyze the properties of adversarial training from multiple aspects, including the upper bounds of adversarial loss, the time evolution of weight variance, the adversarial trainable conditions, and the degradation of network capacity. These results could be helpful for the understanding of adversarial training and inspire more efforts on this topic.
Strengths: 1. It proposes a new framework to analyze adversarial training theoretically based on mean field theory.
2. Based on the proposed framework, it presents several theoretical results for adversarial training.
3. The proposed framework and the presented theoretical results are non-trivial and helpful for the understanding of adversarial training.
Weaknesses: It studies several different adversarial training characteristics in the main paper. Is there any correlation between these different characteristics? Why do we choose these aspects for analysis? Further, is it possible to provide a global diagram to better see which properties can be analyzed and which cannot be analyzed at present based on the proposed framework?
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: I am not an expert on mean field theory and didn't check all the proofs. I give my rating by considering that 1) the theoretical understanding of adversarial training is important and the progress on this is very helpful for the community and 2) the proposed framework seems to be generic and may inspire more work in the future.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 4 excellent
Limitations: yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We would like to thank you for your insightful comments.
> It studies several different adversarial training characteristics in the main paper. Is there any correlation between these different characteristics? Why do we choose these aspects for analysis?
Our theoretical results on adversarial training can be divided into two categories: those related to training dynamics (Theorems 5.6 and 5.7) and those related to test time performance (Theorem 5.8). Both are fundamental aspects of machine learning. More specifically, Theorems 5.6 and 5.7 answer why vanilla networks cannot achieve high training accuracy in some situations, while residual networks always can. Theorem 5.8 addresses the degradation of network capacity, which is tied to the generalization performance of networks, i.e., test accuracy. Similar studies have been conducted on standard training [1,2]. However, the untrainability of vanilla networks and the loss of capacity are unique to adversarial training. Thus, our choice of analysis allows us to highlight the significant differences between adversarial and standard training.
Theorems 5.1 and 5.3 present the upper bounds of the adversarial loss and the time evolution of weight variance, respectively. While they were originally formulated to derive Theorems 5.6-5.8, they are also interesting results in their own right.
[1] Deep information propagation, ICLR17.
[2] Universal statistics of Fisher information in deep neural networks: Mean field approach, AISTATS19.
> Further, is it possible to provide a global diagram to better see which properties can be analyzed and which cannot be analyzed at present based on the proposed framework?
The proposed framework can analyze the following properties of adversarial training: (i) the upper bounds of the adversarial loss, (ii) time evolution of weight variance, (iii) adversarial trainability, and (iv) degradation of network capacity. Moreover, we believe that our approach can be applied to other deep learning methods besides adversarial training (cf. Section 4.2).
A major limitation of our analysis, which is common to all mean field-based analyses, is its theoretical restriction to the early stages of training, making it not directly applicable to the full training process. Nevertheless, mean field-based analyses offer superior flexibility compared to other approaches; they can handle multi-layer perceptrons (including shortcuts) with nonlinear activations without imposing any assumptions on the data distribution. We have discussed this in Section 7, but will strive to make it clearer in the updated version.
Note that it has been observed empirically that the theoretical results from the early stages of training align well with fully trained networks [3,4]. In this study, for example, our theoretical prediction in Theorems 5.8 and F.16 - network width strongly influences generalization performance in adversarial training - holds even for fully trained networks (cf. Tables A7 and A8). Therefore, even though our framework is not strictly applicable to fully trained networks, we believe that it can still provide important insights for fully trained networks.
[3] First-order adversarial vulnerability of neural networks and input dimension, ICML19.
[4] Adversarial robustness guarantees for random deep neural networks, ICML21. | Rebuttal 1:
Rebuttal: We appreciate the reviewers' critical reading, constructive comments, and overall positive scores. We have carefully taken into account all the comments and questions. Please kindly refer to the response to each reviewer. If our answers require further explanation or clarification, we are more than willing to provide them. | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
A Trichotomy for Transductive Online Learning | Accept (poster) | Summary: This paper studies realizable transductive online learning on a fixed and *known* sequence $x_1,\dots,x_n$ (meaning the order is given). The authors state a trichotomy of error rates depending on the finiteness of the VC dimension and Littlestone dimension of the given hypothesis space. Along the way they improve a lower bound on the number of mistakes in terms of the Littlestone dimension. They use the relationship to the threshold dimension to achieve their bounds. Finally, they extend their results to the multi-class case.
Strengths: Nice overview. Well written. Easy to follow proofs.
---- rebuttal comment -----
As the authors addressed multiple concerns and added additional results on the agnostic case, I raised my score from 6 to 7. I think this paper is valuable for the online learning community.
Weaknesses: Most of the claimed results are already known or exist implicitly. In particular, all the bounds used in the trichotomy are standard (either from Littlestone's Halving / SOA, or from the online vs offline paper). Even the additional $\log(n)$ lower bound given by the threshold dimension was essentially covered by an example (called "$\sigma_{\text{worst}}$") in [BKM97] (without using the threshold dimension explicitly though). The actual main novelty (which is still a decent and interesting contribution) is the improvement of the lower bound from $\sqrt{\log(LD(H))}$ to $\log(LD(H))$.
The informal version of Thm 4.1 is slightly misleading. E.g., in case 2, if $VC(H)=n$, the rate is still $\Theta(n)$ instead of $\Theta(\log(n))$.
I would suggest writing up the bounds for a finite instance space $|\mathcal{X}|=n$ as corollaries. This might be interesting for various settings (e.g., node classification) and guide the intuition of the reader.
I am not sure referring to the setting here with the sequence $(x_1,\dots,x_n)$ known in advance to the learner as "transductive" is the best idea. Transductive typically refers to the set (without knowing the order) $\\{x_1,\dots,x_n\\}$ being known. Even one of the referenced papers [KK05], which the authors use to justify the name change from $M_{worst}$ to transductive, merely refers to the fact that the set is known but not necessarily the sequence (e.g., "the set of unlabeled examples is then presented to the learner") as transductive. Similarly, for instance, the paper [CS13] refers to the fact that the set is known (instead of the sequence) as transductive: "In this model, we have an arbitrary sequence of labeled examples $(x_1, y_1), . . . , (x_T , y_T)$, where only the set $\\{x_1, . . . , x_T\\}$ is known". I might be wrong though. Please discuss.
Small typos and writing
* line 126: $u_{\leq k}$ should probably be $x$?
* line 202: "Claim claim"
* line 235: "makes $d$ mistakes --> "makes at most $d$ mistakes".
* line 236: What is $m$? Probably $d$?
Given these weaknesses I would say that this work is rather incremental and has limited novelty. Nevertheless it offers some nice new connections (e.g., threshold dimension) and gives a nice overview (the trichotomy) about bounds which (implicitly or explicitly) exist in previous work.
Technical Quality: 4 excellent
Clarity: 3 good
Questions for Authors: Essentially the results (as most such online learning papers) leave the gap between $VC+\log n$ and $VC \cdot \log n$ open. Do you have any ideas how to close this gap? Sure you state that both bounds are tight on specific instance sequences, but maybe there is a different combinatorial parameter (probably depending on the particular sequence?) interpolating between "$+$" and "$\cdot$" allowing to get instance-dependent tight bounds.
You briefly discuss some issues in the multi-class case with $|\mathcal{Y}|=\infty$. However there is the recent work by Brukhim et al [FOCS 2022] using the Daniely-Shalev-Shwartz (DS) dimension to characterize multi-class learnability even if $|\mathcal{Y}|=\infty$. Do these results apply in your setting by swapping the Natarajan dimension (or graph dimension etc.) with the DS dimension?
Is the $\log(LD(H))$ lower bound best possible? I.e., are there specific examples where this is tight (meaning, same upper bound).
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 3 good
Contribution: 2 fair
Limitations: /
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Comment:
Most of the claimed results are already known or exist implicitly. In particular, all the bounds used in the trichotomy are standard [...]. Even the additional lower bound given by the threshold dimension was essentially covered by an example (called "sigma_worst") in [BKM97] (without using the threshold dimension explicitly though). The actual main novelty (which is still a decent and interesting contribution) is the improvement of the lower bound from
sqrt(log(LD)) to log(LD).**
**Answer:**
We believe our paper makes some neat and non-trivial contributions, see our **_General Answer 1_** in the Author Rebuttal section.
**Comment:
The informal version of Thm 4.1. is slightly misleading. E.g., case 2 If $V C(H)=n$ the rate is still $\Theta(n)$ instead of $\log (n)$.**
**Answer:** Following the suggestion of Reviewer GnLU, we will add a comment that the constants in cases 2 and 3 of the informal trichotomy hide a dependence on the VC and Littlestone dimensions, respectively.
**Comment:
I would suggest writing up the bounds for a finite instance space $|\mathcal{X}|=n$ as corollaries. This might be interesting for various settings (e.g., node classfication) and guide the intuition of the reader.**
Interesting suggestion! We are happy to add this to the paper. Are there any references or specific examples that you have in mind? Were you perhaps thinking of something like Herbster, Pontil & Wainer "Online learning over graphs", or something else?
**Comment:
I am not sure referring to the setting here with the sequence $\left(x_1, \ldots, x_n\right)$ known in advance to the learner as "transductive" is the best idea. Transductive typically refers to the set (without knowing the order) $\\{x_1, \dots, x_n\\}$ to be known.**
**Answer:** Thank you for pointing this out! There appears to be some ambiguity in the literature regarding which of these settings should be called "transductive". For instance, Syrgkanis, Krishnamurthy and Schapire (http://proceedings.mlr.press/v48/syrgkanis16.pdf) write of "a transductive setting (Ben-David et al., 1997) in which the learner knows the arriving contexts a priori, or, less stringently, knows only the set, but not necessarily the actual sequence or multiplicity with which each context arrives." We agree that it is better to avoid this ambiguity. To that end, we will qualify the name used in our paper, distinguishing between a "sequence-transductive setting" and "set-transductive setting" or something similar. Suggestions for names are welcome!
**Comment:
Essentially the results (as most such online learning papers) leave the gap between $VC+\log n$ and $VC \cdot \log n$ open. Do you have any ideas how to close this gap? Sure you state that both bounds are tight on specific instance sequences, but maybe there is a different combinatorial parameter (probably depending on the particular sequence?) interpolating between "+" and "\cdot" allowing to get instance-dependent tight bounds.**
We agree that this is an intriguing question. We don't know the answer, and we intend to continue investigating this in future work.
**Comment:
You briefly discuss some issues in the multi-class case with $|\mathcal{Y}|=\infty$. However there is the recent work by Brukhim et al using the DS dimension to characterize multi-class learnability even if $|\mathcal{Y}|=\infty$. Do these results apply in your setting by swapping the Natarajan dimension with the DS dimension?**
**Answer:**
Excellent question! It is indeed interesting to note that the **_DS dimension does not characterize learnability_** in the transductive online learning setting. Here is a counterexample: for any natural $n$, we construct a hypothesis class $\mathcal{H}$ that has DS dimension $1$, but the adversary can force $M(\mathcal{H},n)=n$ mistakes. The class is defined as follows. Let $\{x_0,\dots,x_{n-1}\}\subseteq \mathcal{X}$ be distinct instances. Consider a complete binary tree of depth $n$, such that for each $i \in \{0,1,...,n-1\}$, all the nodes at layer $i$ of the tree (i.e., at distance $i$ from the root) are labeled with instance $x_i$, and each edge in the tree is labeled with a unique label (that does not appear anywhere else in the tree). Let $\mathcal{H}$ be a set of functions that shatters this tree. Observe that the adversary can select the sequence $(x_0,\dots,x_{n-1})$ and force $n$ mistakes. We now argue that the DS dimension of $\mathcal{H}$ is $1$.
Assume for contradiction that there exists a $2$-pseudocube for $\mathcal{H}$ in the DS sense. Namely, there exists a collection of vectors of length 2, $C = \{((x_i^1,y_i^1),(x_i^2,y_i^2)): i \in I\} \subseteq (\mathcal{X}\times\mathcal{Y})^2$ such that each vector is realizable by a function in $\mathcal{H}$, and for each $v = ((x_i^1,y_i^1),(x_i^2,y_i^2)) \in C$ and $j \in \{0,1\}$ there exists $\tilde{v} = ((x_i^1,\tilde{y}_i^1),(x_i^2,\tilde{y}_i^2)) \in C$ such that $y^j_i = {\tilde{y}_i}^j$ and $y_i^{1-j} \neq \tilde{y}_i^{1-j}$.
Fix $j$ and a pair $v, \tilde{v}$ that agree on the label of $x_i^j$ and disagree on the label of $x_i^{1-j}$. Choose $v$ and $\tilde{v}$ such that $x_i^j$ appears deeper down in the tree than $x_i^{1-j}$ does (this is possible because $C$ is a pseudocube). Because all the labels in the tree are unique, if two functions agree on the label for the deeper node $x_i^j$, then they must also agree on the label for the less deep node $x_i^{1-j}$. This contradicts the assumption that both $v$ and $\tilde{v}$ are realizable by $\mathcal{H}$.
**Comment:
Is the log(LD) lower bound best possible? I.e., are there specific examples where this is tight (meaning, same upper bound).**
Excellent question. We have an idea that could potentially yield a $\sqrt{\mathsf{LD}}$ upper bound, but it is quite complex and would justify an entirely new paper (if it works). Even if that proof works, the gap between the upper and lower bounds would still be large.
---
Rebuttal Comment 1.1:
Comment: Thank you for your clarifications.
Interesting that the DS dimension does not characterise this setting here!
I like the terms set-transductive vs sequence-transductive. Either way, some paragraph on related work discussing this ambiguity would be more than sufficient and helpful for readers.
Do you think that your derivation of the $\log(TD(H))$ lower bound is essentially using the same arguments as the ones discussed for the specific sequence $\sigma_\text{worst}$ in BKM97? Please comment.
Another small comment: maybe indicate in Thm 4.1 that $C(H)\leq d$ (with $d=\mathrm{Ldim}$); otherwise it is unclear how large this quantity might be without looking at the proof.
Finally, I'd be happy if you could give more comments on the agnostic setting mentioned by another reviewer. Do you have some first results?
---
Reply to Comment 1.1.1:
Comment: Thank you for your interest! We have added a detailed discussion on the agnostic case, please see **_General Answer 2: The Agnostic Case_** above.
We will address your remaining points soon :) | Summary: This paper studies transductive online learning, which differs from standard learning in that the adversary must commit to a sequence of instances to be labelled by the learner at the start of the game. The adversary's strategy can thus only be adaptive w.r.t. the labeling of the sequence, not the sequence itself. The goal of the paper is to bound the number of mistakes made by learning algorithms on a sequence of length $n$.
The main result is comprised of three different set ups:
- If the VC dimension of a concept class is infinite, then the number of mistakes is $n$,
- If the VC dimension is finite, but the Littlestone dimension is infinite, then the number of mistakes is logarithmic in $n$,
- Finally, if the Littlestone dimension is finite, then the number of mistakes is constant (in the sense that it does not depend on $n$ -- it will however depend on the Littlestone dimension).
Strengths: - Conceptually interesting to have a trichotomy that depends on the VC and Littlestone dimensions
- The paper is well-written and easy to follow, especially the proofs, which are clear and well-explained
- The proof of Theorem 3.1 is interesting, and carries the technical weight of the paper, in addition to being a significant improvement on previous lower bounds
- This paper seems of interest to the online learning theory community
Weaknesses: - Apart from the proof of Theorem 3.1, the proofs seem relatively straightforward, so the originality/novelty of the work is mainly conceptual, not technical. More specifically, items 1 and 3 from Theorem 4.1 are applications of the definitions of the VC and Littlestone dimensions -- but Theorem 4.1 is the main contribution of the paper, apart from the lower bound. Item 2 at first glance seems more substantial, but l.218-225 simply explain the Halving algorithm, leaving the main technical tool as an application of the Sauer-Shelah-Perles lemma.
- A substantial improvement would come from getting a tighter bound w.r.t. the Littlestone dimension for the constant in item 3 of Theorem 4.1, as for now the proof (l.234-236) follows directly from the definition of the Littlestone dimension (of course, easier said than done! I just think there is some technical weight missing in the paper)
I was hesitating between a 4 and 5, but opted for 5 because of Thm 3.1 and the conceptual contribution.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: Questions
- l.236: what is $m$? Do you mean $d$?
- Is there a fundamental difference in the derivation of the results for the multiclass setting (vs binary)?
Comments:
- Informal version of Thm 4.1 (l.69): I think it should be specified that the constants in cases 2 and 3 hide a dependence on the VC and Littlestone dimensions, respectively.
- l.202: "Claim claim"
- Perhaps include a future work/conclusion section?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 2 fair
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Comment:**
**Apart from the proof of Theorem 3.1, the proofs seem relatively straightforward, so the originality/novelty of the work is mainly conceptual, not technical. [...] I just think there is some technical weight missing in the paper.**
**Answer:**
We believe our paper makes some neat and non-trivial contributions, see our **_General Answer 1_** in the Author Rebuttal section.
**Comment: l.236: what is m? Do you mean d?**
**Answer:** Yes, this should be $d$. Thanks for identifying this typo!
**Question:**
**Is there a fundamental difference in the derivation of the results for the multiclass setting (vs binary)?**
**Answer:**
Technically, the multi-class case involves considerable additional work, as we explain in the third bullet of _General Answer 1_ in the Author Rebuttal section.
**Comment:**
**Informal version of Thm 4.1 (l.69): I think it should be specified that the constants in cases 2 and 3 hide a dependence on the VC and Littlestone dimensions, respectively.**
**Answer:**
Thanks for the suggestion! We will add this.
**Question:**
**Perhaps include a future work/conclusion section?**
**Answer:**
The camera-ready version will include a Future Work section, touching on the agnostic case and discussing our thoughts on obtaining sharper bounds in the $\Theta(\log(n))$ and $\Theta(1)$ cases of the trichotomy.
---
Rebuttal Comment 1.1:
Title: Response
Comment: Thanks for the general reply and for the specific answers to my questions. I think it would be worth highlighting the technical contribution of the multi-class setting in the main paper, even at a high level -- otherwise it seems like there isn't much difference with the binary setting. Given the page limit, there is definitely enough room to add this discussion/overview! | Summary: The paper considers the realizable case in the transductive online learning setting. It shows that, for a sequence of length $n$, the number of errors is $\Theta(n)$, $\Theta(\log n)$ or $\Theta(1)$ depending on the finiteness of the VC dimension and the Littlestone dimension. In the last case, the paper also improves the dependence on the Littlestone dimension $D_L$ from $\Omega(\sqrt{D_L})$ to $\Omega(D_L)$.
Strengths: The paper looks sound and their main result is clear.
Weaknesses: I have the following concerns:
1. The significance of the main contribution. In Theorem 4.1, the only non-trivial case is the upper bound in Item 2, and, as I understand, it follows from [KK05]. Maybe pointing out the trichotomy is significant in itself, but I'm not in the area, so I can't judge it.
2. Presentation should be improved. It took me some time to parse Definition 2.4, and I could do this only because I knew the definition before. Both this definition and the reasoning in lines 185-188 would be better replaced with pictures.
3. I didn't base my judgment on this point, but I suspect that not all relevant references are included. E.g., I think that papers such as [KK05] should be cited.
Minor issues:
* Line 202: "claim" is repeated
* Why did you move some material to the supplementary? There should be enough space in the main body.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: * Could the multiclass case be stated in terms of the graph dimension? Then there would be no $\log k$ factor.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 1 poor
Limitations: The paper only considers the realizable case
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Comment:**
**The significance of the main contribution. In Theorem 4.1, the only non-trivial case is the upper bound in Item 2, and, as I understand, it follows from [KK05]. Maybe pointing out the trichotomy is significant in itself, but I'm not in the area, so I can't judge it.**
**Answer:**
Please see General Answer 1 above. Regarding [KK05], note that:
- They do not discuss lower bounds
- Their upper bounds are a bit different. Specifically, they use a randomized or “hallucination” algorithm to show an expected bound, whereas we show a worst-case bound (we also shave off a $\log(d)$ factor from their result, reducing $d\log(n)$ to $d\log(n/d)$, but that is easy).
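For concreteness, here is a minimal sketch of the Halving algorithm on a toy finite class (a hypothetical example, not from the paper: singletons over $n$ points, so $|\mathcal{H}|=n$ and at most $\log_2 n$ mistakes are made on any sequence):

```python
import math

def halving(hypotheses, xs, target):
    """Predict by majority vote over the version space; each mistake halves it."""
    version_space = list(hypotheses)
    mistakes = 0
    for x in xs:
        votes = sum(h(x) for h in version_space)
        pred = 1 if 2 * votes >= len(version_space) else 0
        y = target(x)
        if pred != y:
            mistakes += 1
        # keep only hypotheses consistent with the revealed label
        version_space = [h for h in version_space if h(x) == y]
    return mistakes

n = 64
# toy class of singletons over n points: h_i(x) = 1 iff x == i, so |H| = n
H = [(lambda i: (lambda x: int(x == i)))(i) for i in range(n)]
m = halving(H, range(n), H[3])
assert m <= math.ceil(math.log2(n))  # Halving makes at most log2|H| mistakes
```

The bound holds because a mistake means the majority of the version space was wrong, so the consistency filter removes at least half of it.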
**Comment:**
**Presentation should be improved. It took me some time to parse Definition 2.4, and I could do this only because I knew the definition before. Both this definition and the reasoning in lines 185-188 are better replaced with pictures.**
**Answer:**
Agreed, we will add figures for the definition of a Littlestone tree and for the dyadic order argument.
**Comment:**
**I think that some papers [KK05] should potentially be cited.**
**Answer:**
Thank you for pointing this out! The camera-ready version will include a discussion comparing our work with [KK05] and the other papers mentioned in Footnote 1 on Page 1.
---
Rebuttal Comment 1.1:
Comment: Thank you for the reply. Can you please answer the question about the graph dimension?
---
Reply to Comment 1.1.1:
Comment: Thank you for calling this to our attention! Apologies, we accidentally omitted our answer to the question about the graph dimension.
We actually don't see a reason why using the graph dimension would allow us to eliminate the $\log(k)$ factor from the upper bound in Theorem B.2. Our upper bound uses the Sauer–Shelah–Perles lemma for the Natarajan dimension, which states that a class with Natarajan dimension $d$ and $k$ possible labels has at most $\left(e n k^2/d\right)^d$ functions, so the halving algorithm will make at most $O(d \log (n k / d))$ mistakes.
Similarly, note that a class with graph dimension $d$ can also have a number of functions that grows roughly like $k^d$, so the same upper bound would follow if we apply the same proof strategy to the graph dimension. For example, the class of all functions $f: [d] \to [k]$ has graph dimension $d$ and contains $k^d$ functions.
While this shows that the same proof would not yield an improved upper bound, it is of course entirely possible that a different proof might do so. | Summary: This work considers transductive online learning where the pool of examples to be labeled is fixed and known to the learner. Thus, the adversary can
control the order in which the points are presented to the learner and the label that the learner receives in each round. The goal of the learner is to predict labels in each round so as to make as few mistakes as possible. In this work the
authors provide new results on the minimum possible number of mistakes.
They show that the minimal number of mistakes can either be constant, grow logarithmically, or grow linearly with the number of examples $n$ that are to be labeled by the learner. More precisely, they show that if the VC dimension of the class is infinite then the number of mistakes is $n$, if the VC dimension is bounded but the Littlestone dimension is infinite then the number of mistakes is $\Theta(\log n)$, and when the Littlestone dimension is bounded the number of mistakes is constant. Moreover, when the Littlestone dimension is at most $d$ the authors show an $\Omega(\log d)$ lower bound on the number of mistakes.
Strengths: The transductive learning model is a well-motivated and interesting learning model that may often be more realistic than the fully adversarial (standard) online setting where the adversary can choose the examples to be labeled.
This work provides a new trichotomy result and an improved bound ($\Omega(\log d)$ instead of the $\Omega(\sqrt{\log d})$ shown in prior work) on the number of mistakes under bounded Littlestone dimension. Moreover, the argument for the $\Omega(\log d)$ bound is nice.
Weaknesses:
My main concern is that the main result (the trichotomy of Theorem 4.1) of this work follows rather easily from prior works. It is not hard to show that when the VC dimension is infinite the adversary can always force the learner to make $n$ mistakes. Moreover, with bounded VC dimension (less than or equal to $d$), the algorithm that achieves the $d \log (n/d)$ mistake bound is the standard halving algorithm that has been used extensively in the online learning literature. Finally, for the third assertion of Theorem 4.1, when the Littlestone dimension is bounded the work of Littlestone '87 directly yields a finite mistake bound. The multiclass generalization of the trichotomy is a nice generalization but follows in the same way as the binary result and does not seem to add much to the paper from a technical point of view.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: See weaknesses. Stating the key ideas or differences of the techniques of this work compared to the prior works could help.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 3 good
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Comment:
My main concern is that the main result (the trichotomy of Theorem 4.1) of this work follows rather easily from prior works. It is not hard to show that when the VC dimension is infinite the adversary can always force the learner to make $n$ mistakes. Moreover, with bounded VC dimension (less than or equal to $d$), the algorithm that achieves the $d\log(n/d)$ mistake bound is the standard halving algorithm that has been used extensively in the online learning literature. Finally, for the third assertion of Theorem 4.1, when the Littlestone dimension is bounded the work of Littlestone '87 directly yields a finite mistake bound.**
**Answer:**
We believe our paper makes some neat and non-trivial contributions, and is worthy of publication. We invite the reviewer to reconsider their position on this after reading our **_General Answer 1_** in the Author Rebuttal section.
**Comment:
The multiclass generalization of the trichotomy is a nice generalization but follows in the same way as the binary result and does not seem to add much to the paper from a technical point of view.**
**Answer:**
Specifically, see the third bullet of General Answer 1 in the Author Rebuttal.
**Comment:**
**Stating the key ideas or differences of the techniques of this work compared to the prior works could help.**
**Answer:**
The camera-ready version will include a discussion comparing our work with [KK05] and the other papers mentioned in Footnote 1 on Page 1.
---
Rebuttal Comment 1.1:
Comment: Please see our **_General Answer 2: The Agnostic Case_** above. We feel this is a nice addition! | Rebuttal 1:
Rebuttal: **General Comment 1:**
**A recurring comment is that the technical contribution of the paper is not substantial enough (e.g., Reviewer gZ3U: “the main result of this work follows rather easily from prior works”).**
**General Answer 1:**
- First, we believe that our trichotomy (Theorem 4.1) is a meaningful conceptual contribution. As one reviewer pointed out, “Such trichotomy has been identified in the recently introduced setting of universal learning [BHMvHY2021] [...] this is the first paper to note such trichotomy in any variant of online learning model”. We feel that this result uncovers an elegant picture of the landscape. We believe it is neat and didactic enough that it could, for instance, be taught in an undergraduate course that covers online learning. The community would benefit from having this published.
- That being said, the proof of the trichotomy does include a technical contribution that is not entirely trivial. Specifically, the lower bound in item 2 of the trichotomy uses the lower bound that follows from the threshold dimension (Theorem 3.3 and Claim 3.4). One reviewer called this a “nice new connection”.
- On top of that, the proof of the multiclass trichotomy (Theorem B.2 in the supplementary materials) involves considerably more technical work. First, it requires identifying the “correct” definition of threshold dimension for the multiclass case (Definition A.2). Next, the proof of the log(n) lower bound (Theorem A.5) uses a result from Ramsey theory (Lemma A.7), which would probably not be the first thing that comes to mind for most people. We did not emphasize this technical aspect in the main body of the paper because we focused on the conceptually-clean message of the trichotomy, but we feel that the appendix constitutes a worthy technical contribution.
- Finally, the proof of our $\log(\mathsf{LD}(\mathcal{H}))$ lower bound (Theorem 3.1) is both non-trivial, and improves on the previous result of Ben-David, Kushilevitz, and Mansour from 26 years ago. As one reviewer wrote, “the proof technique of Theorem 3.1 is novel and differs from standard lower bound techniques in online learning literature.” We maintain that this result does not follow “rather easily” from prior works.
Taken together, we believe that these contributions merit accepting the paper to NeurIPS. | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: This paper studies transductive online learning. In this setup, the adversary fixes the sequence of unlabeled instances and reveals labels sequentially following learner's prediction in each round. The paper shows a trichotomy of rates for binary classification: $n$, $\Theta(\log{n})$ and $\Theta(1)$ based on finiteness/infiniteness of VC and Littlestone dimension of the hypotheses class. The authors also extend the result to multiclass setting establishing a similar trichotomy depending on Natarajan and multiclass extension of Littlestone dimension. Finally, the authors provide a quadratic improvement in the known lower bound in the case $\Theta(1)$.
Strengths: 1. The paper identifies the trichotomy of rates in online transductive learning. Such trichotomy has been identified in the recently introduced setting of universal learning [BHMvHY2021]. As far as I know, this is the first paper to note such trichotomy in any variant of online learning model. Qualitatively, the paper shows that VC, instead of more restrictive Littlestone, is necessary and sufficient for learnability of the hypotheses class in transductive online setting.
2. The paper studies a theoretically interesting problem and motivates the problem through a meaningful example. Apart from the proof in Section 6 which I found to be written sloppily, the paper is well-written and easy to follow.
3. The proof technique of Theorem 3.1 is novel and differs from standard lower bound techniques in online learning literature.
[BHMvHY2021] Olivier Bousquet, Steve Hanneke, Shay Moran, Ramon van Handel, and Amir Yehudayoff. 2021. A theory of universal learning. In Proceedings of the 53rd Annual ACM SIGACT Symposium on Theory of Computing (STOC 2021). Association for Computing Machinery, New York, NY, USA, 532–541. https://doi.org/10.1145/3406325.3451087
Weaknesses: My general impression of the paper is that there is a room for a more comprehensive treatment of the material considered here.
1. I might be speaking with a hindsight bias here, but I think most of the proofs regarding the trichotomy (Theorem 4.1) are straightforward except that of Claim 3.4 (also see my question below). In view of that, I would have preferred to see a sharper analysis of the mistake bounds in Items 2 and 3. Namely: (a) What is the optimal dependence on $\text{VC}(\mathcal{H})$ and $n$ for the case $\text{VC}(\mathcal{H}) < \infty$ and $\text{LD} (\mathcal{H}) = \infty$? (b) Is there an algorithm that exploits the transductive nature of the game to get a mistake bound better than $\text{LD}(\mathcal{H})$ when it is finite?
2. The exposition of the proof of Theorem 3.1, arguably the main technical contribution of the paper, can be improved. For instance, what is $j$ in $\varepsilon_t = \varepsilon^{(j+1)}$ in line 267? What is the role of the constant $C_1$ in the proof/result? Line 263 claims we can take $C_1=5$; if so, why not just use $5$ for a clearer presentation (although I don't see why it can't be 2)? What is the subscript $1$ in $C_1$ indexing? The paragraph following line 270 (that analyzes the adversary) can also be improved in its clarity.
3. The authors could have considered agnostic setting as well. What is the characterization of learnability for online agnostic transductive learning? Does one observe such trichotomy of rates?
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. Does Theorem 3.1 imply the $\log{n}$ lower bound in Item (2) of Theorem 4.1 using the fact that $\text{LD}(\mathcal{H}) \geq n$ for each $n$? If so, is the discussion of Threshold dimension and Claim 3.4 required to establish the trichotomy for binary case? What about multiclass extension?
2. In Lines 44-45, the authors claim that neither party benefits from using randomness. Do the authors mean that there won't be any quantitative difference (up to a constant factor) between deterministic and randomized learning rules? This is most likely true but I would like the authors to justify their statement instead of just claiming it.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Not applicable.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Comment:
I might be speaking with a hindsight bias here, but I think most of the proofs regarding the trichotomy (Theorem 4.1) are straightforward except that of Claim 3.4 (also see my question below).**
**Answer:** We believe our paper makes some neat and non-trivial contributions, see our **_General Answer 1_** in the Author Rebuttal section.
**Comment**
**The exposition of proof of Theorem 3.1, arguably the main technical contributions of the paper, can be improved.**
**Answer:** Agreed. That proof is clearly less polished than the rest of the paper. In a camera-ready version, we would be sure to iron it out and bring it up to the standard of writing we have shown in the rest of the paper. (The math of the proof is correct.)
**Comment:
Does Theorem 3.1 imply the $\log n$ lower bound in Item (2) of Theorem 4.1 using the fact that $\mathsf{LD}(\mathcal{H}) \geq n$ for each $n$ ? If so, is the discussion of Threshold dimension and Claim 3.4 required to establish the trichotomy for binary case? What about multiclass extension?**
**Answer:** This is a good question! But no -- Theorem 3.1 does not imply the $\log(n)$ lower bound in the trichotomy. Suppose that $\mathsf{LD}(\mathcal{H}) \geq d$ for some integer $d$. In the lower bound of Theorem 3.1, the adversary takes a shattered Littlestone tree of depth $d$, presents the nodes of that tree in breadth-first order, and forces $c_0 \cdot \log(d)$ mistakes. However, a tree of depth $d$ has $\Omega(2^d)$ nodes, so the lower bound implies that if the adversary presents $n = \Omega(2^d)$ instances, it can force $c_0 \cdot \log(d) = \Omega(\log\log(n))$ mistakes. Namely, using Theorem 3.1 would yield a lower bound of $\Omega(\log\log(n))$ mistakes, which is exponentially weaker than the $\log(n)$ lower bound we present in Item (2) of Theorem 4.1.
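In symbols (this is just a restatement of the argument above, not a new claim), presenting all the nodes of a shattered depth-$d$ tree gives

```latex
n = \Theta(2^{d})
\;\Longrightarrow\;
d = \Theta(\log n)
\;\Longrightarrow\;
c_0 \cdot \log(d) = \Theta(\log\log n) = o(\log n),
```

which is why Theorem 3.1 alone cannot recover the $\log(n)$ lower bound of the trichotomy.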
As an aside, we note that beyond this quantitative advantage, our proof of the $\log(n)$ lower bound via the threshold dimension has the additional advantage that it generalizes to the multiclass setting, as we show in the supplementary materials.
**Question:**
**Is there an algorithm that exploits the transductive nature of the game to get a mistake bound better than $\mathsf{LD}(\mathcal{H})$ when it is finite?**
**Answer:**
This is an open question. The "correct" mistake bound in the $\Theta(1)$ case can be anywhere between $\mathsf{LD}(\mathcal{H})$ and $\log(\mathsf{LD}(\mathcal{H}))$.
**Question:**
**In Lines 44-45, the authors claim that neither party benefits from using randomness. Do the authors mean that there won't be any quantitative difference (up to a constant factor) between deterministic and randomized learning rules? This is most likely true but I would like the authors to justify their statement instead of just claiming it.**
**Answer:**
Agreed. This is not hard, we will include a proof in the camera-ready version.
---
Rebuttal Comment 1.1:
Comment: Thank you for your general reply and for answering my specific question. I look forward to reading the polished version of the proof, and I think it would be helpful to include more discussion on proof techniques for multiclass settings in the main text. Given the response by the authors, I am happy to raise the score to 5.
---
Reply to Comment 1.1.1:
Comment: We are thrilled to hear that you are raising your score!!
Per your question on the agnostic case:
**The authors could have considered agnostic setting as well. What is the characterization of learnability for online agnostic transductive learning? Does one observe such trichotomy of rates?**
Please see our **_General Answer 2: The Agnostic Case_** above. We feel this is a valuable addition! | null | null | null | null | null | null |
Low-Rank Learning by Design: the Role of Network Architecture and Activation Linearity in Gradient Rank Collapse | Reject | Summary: The paper provides a comprehensive understanding of gradient rank in deep neural networks (DNNs) and how architectural choices and data structure affect gradient rank bounds. The authors highlight the emergence of low-rank learning as an inherent aspect of certain DNN architectures and propose a theoretical analysis to provide bounds for training fully-connected, recurrent, and convolutional neural networks. They also demonstrate, both theoretically and empirically, how design choices such as activation function linearity, bottleneck layer introduction, convolutional stride, and sequence truncation influence these bounds. The study not only contributes to the understanding of learning dynamics in DNNs but also provides practical guidance for deep learning engineers to make informed design decisions. The authors also discuss the phenomenon of "Neural Collapse," where linear classifiers within DNNs converge to specific geometrical structures during late-stage training, and highlight the role of geometric constraints in learning beyond this terminal phase.
Strengths: This work theoretically analyzes the upper bound of the rank of gradients for different kinds of neural networks. The analysis can offer us great insight when we try to design new modules or activation functions. In addition, the authors provide a detailed practical analysis of the proposed methods.
Weaknesses: For the experiments validating the bottleneck, it would be better to add experiments on convolutional neural networks to make the validation more thorough. For the analysis of the structure, it would be better to add an analysis of some normalization or regularization techniques.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Please refer to Weakness.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your positive feedback regarding our work on low-rank learning! We wholeheartedly agree that our approach to analyzing gradient rank can help in designing new modules and activation functions. This positive direction for future work, which you insightfully point out, further encourages us regarding the importance of our submission to the NeurIPS community and to the research community at large!
## Response to weaknesses
### Response to Request for Modern Architectures
We completely agree that our analysis really shines when applied to modern, large-scale architectures! To this end, our original submission did include four large-scale analyses in our supplementary material, including two large-scale image-recognition models (ResNet, VGG) and two large-scale transformers for language modeling (BERT, XLM). We would like to direct you to figures 3 and 4 in our supplementary material which demonstrate some of the effects of sequence length and image size in particular on the rank of gradients in modern, large-scale architectures.
### Response to Extension to Regularization Modules
Thank you for your insightful comment that our analysis would indeed be augmented by an extension to normalization and regularization modules such as Batch Normalization and Layer Normalization. We think that because of the page limitations in this original submission, this would be a really fascinating endeavor for future work! In particular, an extension to mechanisms such as gradient clipping, L2/L1 regularization and other gradient-centric techniques would make for a fascinating body of work which could emerge from our initial work here! | Summary: The authors present an investigative study into the learning dynamics of neural networks, specifically low-rank neural networks. The motivation is that training dynamics of neural networks are not fully understood. Low-rank models have practical advantages (time / memory). This work provides a theoretical and empirical overview of the learning dynamics of neural networks with a focus on low-rank models.
Strengths: The main strength of this paper is its significance, i.e. the attempt to explain the learning dynamics of neural networks from both a theoretical and empirical perspective. This is a worthy and valuable endeavour which will be of great interest to the field if done correctly and thoroughly. Unfortunately, there are also significant weaknesses, as we will see, which weigh heavily against the strengths.
Weaknesses: There are a number of critical weaknesses present in this paper that severely reduce its value. Perhaps there is a valuable message in there but the presentation, clarity and writing make it impossible to recommend this as an impactful paper. Here are some of the weaknesses detailed:
- Flow and writing can be greatly improved. Very abrupt changes between sections (e.g. from 1 to 2). The sections do not have self-contained introductions to help readers orient themselves and understand the reasons and motivations for choices made.
- Related to the first point, formulae are presented without sufficient discussion. Many sections (notably 2 and 3) read like a series of statements rather than a well-constructed and clearly motivated argument. I encourage the authors to improve the flow to help the reader understand the context.
- Formatting can be improved. What is *error*? Overview of results in lines 77-88 can be greatly improved with respect to formatting.
- Missing definition, what is BPTT in line 104?
- Lack of discussion on the connection between the presented theoretical and empirical results.
- Stated contributions are not found in the paper (effect of stride is in supplementary material and I could not find sequence truncation experiments anywhere).
Technical Quality: 2 fair
Clarity: 1 poor
Questions for Authors: - Why are only leaky-ReLUs investigated? What is the motivation, and how does this differ from standard ReLUs?
- What is the theoretical connection to bottleneck layers bounding the entire network? Why is this limited to bottleneck layers and not the narrowest part of the network?
- What is the impact of training data on the learnt rank? Does the implicit dimension of the dataset influence the rank? With reference to [1]
- Why is the PDF formatted as an image and not as a standard NeurIPS PDF output? The links to section labels and reference links do not work.
[1] Li, C., Farkhoor, H., Liu, R., & Yosinski, J. (2018). Measuring the intrinsic dimension of objective landscapes. arXiv preprint arXiv:1804.08838.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 1 poor
Contribution: 1 poor
Limitations: The authors discuss only one limitation of their work: the relationship between tasks and low-rank emergence. However, there are a number of limitations that need to be discussed. Not only have the experiments scratched the surface of the possible signals from learning dynamics we can collect and paint us a limited picture of what is happening but the results themselves also make us pose new questions that remain undiscussed. Refer to my questions to the authors for some possible discussion points that would improve this paper greatly and help place it within the existing literature and highlight the most obvious next steps to build on this work for future studies.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you so much for recognizing the novelty and significance of our work! We appreciate your thorough critical feedback regarding the writing in particular. We have taken your feedback to heart and have worked diligently to ameliorate these issues while still working within page limitations. Particular technical issues such as links in the PDF not working, which surprised us as well, have also been fixed.
We also thank you for bringing up some issues regarding stated contributions which you could not find in the main body of work. As mentioned in the main text, much additional work was included in supplementary material due to page limitations. In our specific responses below we have directed you to the particular figures in our original supplementary material.
## Response to Weaknesses
### Response to Issues of Writing Quality
Thank you for your feedback regarding the overall flow of the work! We wholeheartedly agree that section introductions and motivating context can really help ground the reader. We were forced to cut some of the original language to this effect in order to make the page limit; however, we agree that the clarity of our paper has suffered because of these cuts. We have worked to provide some additional context particularly in the methods sections where we believe this is most helpful in grounding our presentation. We will continue to revise the text for flow and clarity where possible!
### Response to Claim of Results not Included
As far as we have found after review, we did not make any claims about results that were not included in the main text or the supplementary material. The experiments we performed with differing sequence lengths are included for the BERT architecture on Wikitext in Figure 4 of the supplementary material, which was indeed included with the original submission. We understand that the language surrounding stride in particular may have been confusing, as our experiment with image size in CNNs (also included in the supplement) covers the same underlying mechanism, namely the length of the sequence over which parameter tying occurs. We have changed the language in our introduction to make this more apparent.
## Response to Questions
1. Great question! Leaky-ReLUs are equal to ReLUs when the slope of the nonlinearity is 0, so as we increase the slope toward 1 we move from the fully-nonlinear, standard ReLU toward a fully linear activation. The reason we investigate Leaky-ReLUs in particular is that they give us a clear theoretical bridge from the fully linear network setting (which has been investigated in many previous theoretical works, from Saxe et al. for example) to the more widely applied DNNs with nonlinearities. Additionally, because Leaky-ReLUs (and ReLUs) are piecewise linear, we are able to provide analytical bounds on their effect on the rank! We do believe that there is room for future work on other nonlinear activation functions; however, the standard machinery of linear algebra does not apply to these nonlinear functions. There is some work in functional analysis and nonlinear control theory which we believe may contribute some analogous notions to singular values (and thus to rank) for general nonlinear functions. We have included some figures empirically investigating rank effects of other nonlinearities in our rebuttal, and hope this excites you about the future directions of this work!
2. This is a great question! We call the narrowest part of the network a bottleneck. For example in an MLP, the layer with the smallest number of neurons will be one. Inequality 2 in the submission demonstrates how such a bottleneck affects the rank of the entire network. Bottleneck layers at the input and output will also serve as bottlenecks on the entire rank of the network. For example in MNIST classification with a linear network, the rank of the gradients will be limited to 10 throughout unless there is a layer with number of neurons smaller than 10.
3. This is a great question, which we address in a figure (Figure 1) in the supplement! Indeed, the rank of the input will affect gradient ranks - we show with a low-rank embedded Gaussian in the supplement that this will pop up in gradients of linear networks. We have checked the provided reference and although it uses a very different mechanism, we have cited this work when we mention corresponding results.
4. Thank you for pointing out this technical glitch in our submission! We believe this occurred due to a re-save of the document outside of the LaTeX software, and we have fixed this in our revised document.
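As a concrete companion to point 2, here is a minimal NumPy sketch (our own toy illustration with an arbitrary architecture, not code from the paper) showing how, in a purely linear network, a width-4 bottleneck caps the rank of the first-layer gradient at 4, even though that gradient is a $32 \times 20$ matrix.

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hid, n_bottle, n_out, batch = 20, 32, 4, 10, 64

W1 = rng.standard_normal((n_hid, n_in))
W2 = rng.standard_normal((n_bottle, n_hid))   # width-4 bottleneck layer
W3 = rng.standard_normal((n_out, n_bottle))
X = rng.standard_normal((n_in, batch))
Y = rng.standard_normal((n_out, batch))

# Linear forward pass and the MSE error signal at the output.
E = W3 @ W2 @ W1 @ X - Y                      # shape (n_out, batch)

# Backprop to the first layer: dL/dW1 = (W3 W2)^T E X^T.
dW1 = (W3 @ W2).T @ E @ X.T                   # shape (n_hid, n_in) = 32 x 20

# The gradient factors through the bottleneck, so its rank is at most 4.
assert np.linalg.matrix_rank(dW1) <= n_bottle
```

The same cap applies to every layer upstream of the bottleneck, since each of their backpropagated signals also factors through $W_2$.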
## Response to Limitations
While we agree that there are many more implications of our work to explore, we do discuss a number of limitations to our work beyond what the reviewer has pointed out. Our analysis focuses only on Leaky-ReLU and ReLU activations, and further work here is discussed at several points throughout the paper. Additionally, we acknowledge that there is ample room for extension to regularization techniques such as Batch Normalization, Layer Normalization and more, which we treat as evidence of the potential impact of our work. We have also discussed a possible connection to Neural Collapse, which is treated in a limited way throughout the paper. We realize that more language in the discussion section could be included to re-emphasize these points, which appear throughout our work, and we have extended our discussion as much as possible while staying within the page limit. Additionally, we would like to direct the reviewer to the supplementary material, which includes some additional discussion linking our work to previous literature on ReLU singular values and Neural Collapse. Per your insightful recommendation regarding the detection of rank in input data, we have provided a citation to the recommended work, and we believe there is potential for further work connecting our study to the analysis of rank in objective landscapes.
---
Rebuttal Comment 1.1:
Comment: Thank you for the comprehensive response to my review.
I find the author’s proposed changes to be positive for the overall paper. However, I don’t find all my concerns have been alleviated, particularly with respect to many of the critical aspects of the paper which are still slightly “out of view” in the supplementary.
Having said that I feel the proposed changes are significant enough to warrant an improved rating, which I will do by editing my original review rating. | Summary: This paper studies the gradient rank of DNNs and examine how certain design choice, in particular architectural choices of the model and the structure of the data affect the gradient rank bounds. The paper mainly focuses on theoretical results with some empirical experiments to validate their empirical claims. The paper begins by studying the gradient rank of deep linear models before proceeding to explore leak-ReLU networks. They also extended their analysis to consider convolution and recurrent layers.
Strengths: The paper seems fairly novel, as I have not seen anyone explore the gradient rank, although I think it is important as it can tell us more about the learning dynamics of neural networks. The theoretical analysis is simple, and this is advantageous in my opinion. One of the strengths of the paper is that the authors have analysed a few neural network variants, such as recurrent, convolutional, and leaky-ReLU networks. This demonstrates how general their analysis can be. The experiments are good and test the bounds well.
Weaknesses: It is not clear how to generalise the analysis to more complex nonlinearities, which can thus seemingly only be studied near a limited range of input values (such as near zero) where the network behaves more linearly. The analysis also doesn't look at more modern architectures like transformers.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: DNNs are trained using gradient descent, so it is possible to write the parameters of the network as a weighted sum of gradients. Since we can bound the rank of the gradients with your analysis, can we use that to bound the rank of some of the layers in a DNN?
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Authors have adequately addressed and there is no clear potential negative societal impacts.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We would like to thank you for your positive review of our work! We wholeheartedly agree that one of the strengths of our analysis is the simplicity at its core, which we believe can help lead to better intuition in the growing study of learning dynamics in deep neural networks!
## Response to Generalization to other Nonlinear Functions
Your feedback regarding other nonlinear functions is quite insightful! As you point out, our analysis would not immediately generalize to any nonlinearity that is not piecewise linear. Our primary reason for not taking on general nonlinear activations is that the machinery of linear algebra (particularly the singular value decomposition) does not apply to non-linear functions, and so more mathematical groundwork is needed to motivate any discussion of general nonlinearities. Our work newly recognizes that any piecewise-linear function can be analyzed using the standard linear-algebraic machinery; however, to move forward to more general nonlinear functions, new mathematical machinery is needed, perhaps drawing on previous work in functional analysis of nonlinear functions, or in control theory of nonlinear operators.
A more precise way of putting this limitation is that our analysis can deal with any nonlinear function that is piecewise linear (even “very nonlinear” functions such as standard ReLUs). What our results show, though, is that even a small amount of nonlinearity in a leaky-ReLU will result in increasing the rank up to an upper bound. We believe that there is an exciting frontier which takes on general nonlinear functions; however, because the groundwork needed more significantly departs from previous work analyzing rank, we believe this is better suited for future work.
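As a minimal numerical sketch of this claim (our illustration in NumPy, not code from the paper), one can apply a leaky-ReLU with varying negative slope to a rank-2 matrix and watch the numerical rank rise above 2 as soon as any kink is introduced:

```python
import numpy as np

rng = np.random.default_rng(0)

# A 20x20 matrix with underlying rank 2 (product of thin factors).
A = rng.standard_normal((20, 2)) @ rng.standard_normal((2, 20))

def leaky_relu(x, negative_slope):
    return np.where(x > 0, x, negative_slope * x)

# negative_slope = 1 is the identity and preserves rank 2; any kink
# (slope != 1) generically pushes the numerical rank above 2.
for slope in (1.0, 0.9, 0.5, 0.0):
    print(slope, np.linalg.matrix_rank(leaky_relu(A, slope)))
```

The matrix size and slopes here are arbitrary choices for illustration.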
We would like to refer you to the figures included as part of our rebuttal for an initial empirical investigation of rank-based phenomena in other nonlinear activation functions! We believe there is much more to investigate here, and our initial empirical findings demonstrate some fascinating behavior that begs for thorough theoretical review!
## Response to Request for Modern Architectures
Regarding your point that our analysis doesn’t look at more modern architectures - we would like to point you to our supplementary material where we do perform empirical studies of large-scale networks including two large-scale CNN-based models for image recognition, and a large transformer model for language modeling.
Our theoretical analysis generalizes quite readily to most of the parts of these models as we do discuss the role of parameter tying in CNNs and RNNs (which includes Transformers). We do leave the analysis of auxiliary modules such as Batch and Layer normalization for future work; however, our empirical results on models which do include batch normalization indicate that our analysis still applies. | Summary: This paper explores the collapse of gradient rank in DNNs during training. Based on the simple linear network, the authors theoretically examinate the rough upper bound of the gradient rank for simple MLP, current network and CNN. Furthermore, they analyze the numerical effect of rank on Leaky-ReLU activation.
To verify their results on the rank bounds, experiments are conducted on two synthetic datasets and several real datasets (CIFAR10 and Tiny-ImageNet for computer vision and WikiText for NLP) across pure linear networks and smaller-sized versions of popular architectures, e.g., ResNet16, VGG, BERT, etc.
Experimental results show that (1) a linear bottleneck layer architecture reduces the gradient rank; (2) the gradient rank of recurrent networks and CNNs is proportional to the sequence size; and (3) the negative slope of the leaky-ReLU activation is related to the amount of gradient rank restored.
Strengths: The paper demonstrates a high level of originality in its exploration of gradient rank in linear networks, recurrent networks and CNNs, along with extra analysis of the leaky-ReLU effect.
Weaknesses: 1. Poor writing quality: This paper has not been properly polished before submission, and there are numerous writing issues that have not been carefully checked. For instance, in line 132, the term "phi" should be "\phi". Additionally, in line 217, the notation "FIGURE1left" should be revised for clarity. In line 218, there are two issues: the terms "FIGURE1right" and "In figure" should be corrected ("in figure"?). Moreover, a missing "." in line 235 needs to be addressed. Another concern is the inconsistent section references throughout the paper, such as "sec 3.2.2" in line 226 and "&3.3.1" in line 236. Overall, this article resembles a draft rather than an official submission, and it requires further polishing and thorough review.
2. Unfollowed style: the bibliography style seems not to follow the provided template. Please check your article format, because I cannot search and select words in the main text, and I cannot jump to the referenced figures.
3. Lack of practical network experiments: although the theoretical analysis is mainly conducted on linear networks, and experiments are provided for a 3-layer linear MLP and an RNN, more experiments and further analysis on popular networks are needed to provide insight into the gradient rank in practice.
4. Lack of citation and explanation for BPTT.
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: Because the paper is not well written, most questions are for clarification.
1. what does the "disjoint variable" mean in line 171?
2. what does the "original bound" mean in line 244?
3. what does the "restore" refer to in line 246-255?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 2 fair
Limitations: No potential negative societal impact of their work
Flag For Ethics Review: ['No ethics review needed.']
Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We would like to sincerely thank you for your review of our work! We are especially encouraged by your recognition of the originality of our work—we believe that this rank-based approach to investigating learning theory in Deep Neural Networks is extremely promising, and the results we share in this work are an important first step in setting groundwork for further investigations on the role of rank in gradient dynamics. It is our belief that the weaknesses you perceived in our work are in part addressed by referring to results which we did include in the supplementary materials of the original submission, and the writing issues you point out have been easily addressed by thorough editorial review and polish!
## Response to weaknesses
### Response to Issues on Writing Quality
We would like to thank you for your feedback regarding issues with the writing in our original submission! We agree that clear and polished writing is a vital part of communication in science, and we have taken your feedback as a clear indication that this part of our submission needs significant improvement. The particular issues you have correctly pointed out have been fixed in the revised document, and we are currently continuing to review the submission for any further issues with the writing. Additionally, we have fixed the technical issue with the PDF which was preventing links in the bibliography to work.
### Response to Issues with Bibliography Style
Thank you so much for pointing out the technical issue with our submission! Although we did use the correct template, we believe a resave of the document during the revision process somehow broke links to the bibliography and figures. This is fixed in the revision which we have uploaded here, and we thank you again for bringing it to our attention!
Regarding the citation style - we did use the provided NeurIPS template; however, we would like to refer to the style guide regarding our choice of citation style: "Any choice of citation style is acceptable as long as you are consistent. It is permissible to reduce the font size to \verb+small+ (9 point) when listing the references." From the style guide https://media.neurips.cc/Conferences/NeurIPS2023/Styles/neurips_2023.tex
### Response to Perceived Lack of Large-Scale Experiments
Thank you for your feedback and concern that you did not see any large-scale experiments in the main text of the paper. We are of the same mindset that such large-scale experiments are important, and indeed we included several large-scale analyses on modern architectures as part of the original submission! In the second-to-last paragraph of our Empirical Methods section (section 4, lines 198-207) we outline the experiments performed, which include two popular image-recognition data sets (CIFAR10/Tiny-ImageNet) and a large-scale language-modeling data set (WikiText). We implemented two popular image-recognition CNN architectures (ResNet16/VGG11) and language-modeling transformer architectures (BERT, XLM). Because these architectures are quite large, and the results simply reinforce the proof-of-concept results in the main text, we included the results as part of our supplementary materials. Because NeurIPS requires separate documents for the main text and supplementary materials, we cannot link to the figures directly; however, we have made the references to these experiments clearer. We would encourage the reviewer to look through the supplementary material, as we believe these results do indeed help to reinforce the importance of our work!
### Response to Citation for BPTT
Thank you for catching our omission of the full phrase “Back-Propagation through Time”! We have added the full phrase as it should have been originally included! While there are several possible candidates for citing such a popular optimization paradigm, we have elected to provide a citation to Mozer et al. 1995, which is one of the foundational works on BPTT.
## Response to Questions
1. We believe the reviewer is referring to the “adjoint variable” on line 171, rather than “disjoint”, which we cannot find anywhere in the text. The adjoint variable is the name given to the statistics during the backward phase of Reverse-Mode Auto-Differentiation, which in this case is the partial derivative with respect to the layer output. This is defined in the “Theoretical Methods” section after line 99, and is a commonly used term in the Auto-Differentiation literature.
2. The “original bound” here refers to the boundary under which singular values numerically computed from the output of a leaky-ReLU activation will no longer contribute to the rank. We agree that this language is confusing, and we have changed the text to refer to this as the “rank computed empirically from the output of the activation”. This better demonstrates that our theoretical result is able to exactly predict this bound which previously could only be computed using numerical methods.
3. In this paragraph, when we say a given nonlinearity is or is not “restoring the rank” we mean to say that the resulting output is no longer as rank-deficient as the original matrix. Therefore, when we say that ReLU activations “do not fully restore gradient rank” we simply mean that the resulting output matrix is still not full-rank. We hope this has clarified what was meant here, and we have changed the wording to “restoring to full rank” to hopefully make this more clear to future readers. | Rebuttal 1:
Rebuttal: We would like to thank the reviewers for taking the time to critically evaluate our submission on Low-Rank Learning in Deep Neural Networks! All four of the reviewers recognized the novelty and importance of the work, and we are encouraged by this positive feedback! Our analysis provides important groundwork for analyzing the role of gradient rank in deep learning, and we believe the submission would be important to the NeurIPS community at large.
Much of the critical feedback in this process has focused on technical issues with the writing in the original document, including some broken links, typos, obfuscated language, and need for thorough polishing. We have fixed all specific issues pointed to by reviewers, and are continuing to thoroughly review our manuscript to better increase clarity and improve overall flow. The paper has already been improved significantly by reviewer feedback, and we believe any remaining issues can be easily fixed for the camera-ready version.
All four of our reviewers brought up concerns that our submission lacked large-scale experiments; however, as we mention in the specific responses, four large-scale, modern architectures were included as part of the original submission, with several paragraphs in our methods and results referring to their inclusion as such. Space limitations demand that these figures remain as part of the supplementary material; however, we are thankful for this feedback as it has allowed us to better highlight the supplementary material in the main text with clearer language and more direct reference. In general, we believe that **all critique regarding content is addressed by referring to the supplementary materials** included as part of the original submission.
One fascinating question brought up by several reviewers asks about the extension of this work to general nonlinear functions. Our analysis breaks new ground in deep learning theory in its ability to apply to any nonlinear activation which is piecewise linear; however, we wholeheartedly agree that an extension to general nonlinear functions would be a significantly useful extension for future work. The primary reason for omitting that from this submission is that the standard machinery of linear algebra does not apply to general nonlinear functions, and thus significantly more mathematical groundwork is needed to investigate the connection to rank.
Despite our reasons for not pursuing other nonlinear activations as part of this submission, we do believe, however, that our submission is an important first step in this future direction. As part of this rebuttal we have included a figure showing how different nonlinearities affect the magnitude of the singular values of a given low-rank linear transformation. In this figure, each row represents a different activation function, and each column $k$ represents the magnitude of the $k$th singular value as a function of the magnitude of the top 2 singular values. For this demonstration, we use a transformation with an underlying rank of 2, and we see that different nonlinear activations affect the spectrum quite uniquely as functions of the magnitude of the original top 2 singular values. We hope this demonstrates how fascinating but subtle this rank-based analysis of nonlinear activations can be, and reinforces the importance of our work as an initial step in a fruitful direction for future research.
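A minimal sketch of this kind of experiment (our own illustration, not the rebuttal's actual figure code): apply several activations to a rank-2 matrix of pre-activations and inspect the third singular value, which is (numerically) zero only in the purely linear case:

```python
import numpy as np

rng = np.random.default_rng(1)
# Rank-2 pre-activations from a low-rank linear transformation.
Z = rng.standard_normal((50, 2)) @ rng.standard_normal((2, 50))

activations = {
    "identity": lambda x: x,
    "relu":     lambda x: np.maximum(x, 0.0),
    "tanh":     np.tanh,
    "sigmoid":  lambda x: 1.0 / (1.0 + np.exp(-x)),
}

for name, act in activations.items():
    s = np.linalg.svd(act(Z), compute_uv=False)
    # Ratio of the 3rd to the 1st singular value: ~0 for the identity,
    # strictly positive once any curvature is introduced.
    print(f"{name:8s} s3/s1 = {s[2] / s[0]:.2e}")
```

The matrix shape and activation set are arbitrary choices for this demonstration.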
Overall we are encouraged by the feedback from this team of reviewers! All of the reviewers acknowledge the originality and importance of the work, and any issues with content are mitigated by reference to the supplementary material. We have worked to make references to the supplement clearer in the main body of the text to aid future readers. We have fixed all specific issues with the writing brought up by the team of reviewers, and we are continuing to polish the composition so that the communication of our work is clear for the NeurIPS and research communities at large.
Thank you to the reviewers for your thorough feedback! Your critique and questions have significantly improved the quality of our submission, and we are now even more confident in the value this work would have to the research community!
Pdf: /pdf/6af46d1dddf785a8e7453b96a06eeeb4fab895f6.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Uncertainty-Aware Alignment Network for Cross-Domain Video-Text Retrieval | Accept (poster) | Summary: This paper focuses on text-video retrieval in the challenging unsupervised domain adaptation setting. For this problem, the model is trained on a source-only set of supervised video/text pairs and is adapted to a target domain consisting of videos and texts with no pairing ground truth. The proposed method, named Uncertainty-aware Alignment Network (UAN), uses the assumption that many videos can correspond to a textual item and vice-versa, which allows for multi-granularity modelling. The authors evaluate their method on three different pairs of datasets among MSR-VTT, MSVD, and TGIF. Their method is found to outperform all other unsupervised domain adaptation methods for video retrieval, which adapt classification-based domain adaptation approaches.
Strengths: The experiments that have been run are convincing and thorough, and even include evaluation of the method for image-text domain adaptation, which is nice to see and shows that the proposed approach was created from the ground up for embedding spaces instead of being borrowed from unsupervised domain adaptation approaches for classification. As a final note, the experiments were all run 5 times and averaged for robustness.
The proposed model works well for unsupervised domain adaptation video retrieval, and it makes sense to break from the one-to-one assumption during training to better find potential positives.
Weaknesses: What are the values of a and b given in equation 7/8 and how are these chosen?
Equations 10 and 11 could be better introduced/explained; additionally, it's easy to miss that T and K refer to the batch and to an individual example, respectively.
The discussion on failure cases of the model within the paper is rather short. There is one given failure case within figure 6 which is very briefly explained (so much so that I'm not sure how the model failed on this case). I think these could be better highlighted as another future avenue for related work and better understanding of the method as a reader.
The method calls for many-to-one and one-to-many relationships between the two modalities during training, but for evaluation it is assumed that there is only a one-to-one relationship between modalities, as in previous work. There has been some work that mentions the possibility of many-to-many relationships during training for images [a] and videos [b], and it might be worth discussing this point in the limitations, which also mention the one-to-many relationships at the early training stage.
[a] Chun, Sanghyuk, et al. "Probabilistic embeddings for cross-modal retrieval." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2021.
[b] Wray, Michael, Hazel Doughty, and Dima Damen. "On semantic similarity in video retrieval." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2021.
Other Comments
Line 6: missing space between comma and 'as'
[24] and [25] are duplicated citations
Line 142: inner production -> inner product
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: 1. Can the failure cases of the method be expanded and more information given regarding the failure case in Figure 6
2. What are the values of a and b given in equation 7/8 when training the model and how are these chosen?
3. With the domain adaptation requiring the one-to-one relationship between video and text during evaluation to be relaxed into a many-to-many relationship during training, do you think this has an effect on overall method performance?
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations: Limitations are discussed within the paper in the final paragraph, however, not much is said for the method beyond selecting pairs in the first few epochs likely causes issues. I think this could be expanded further to round out the paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for reviewing our paper and for the positive feedback, pointing out that our method was created `FROM THE GROUND UP FOR EMBEDDING SPACES'! We will revise the typos in the final version.
**Question1**: values of a and b in Eq. 7 and symbols in Eq. 10-12.
**Response1**: In Eq. 7-8 of D-VLDA, a is a negative scale factor, since similarity is inversely proportional to distance, and b is a shift value.
In Eq. 10-12, ${S}^{\mathcal{V} \rightarrow \mathcal{T}}$ denotes the calculated similarities of target $v_{i}^t$ with all the target texts and similar definition for ${S}^{\mathcal{T} \rightarrow \mathcal{V}}$.
For all our experiments, we set a to -0.005 and b to 6. We will add the discussion and ablation of parameters a and b in the Experimental section.
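A minimal sketch of this affine distance-to-similarity mapping (illustrative only; the distance values below are hypothetical, and the full form of Eq. 7-8 is not reproduced here):

```python
a, b = -0.005, 6.0  # values the authors report using in all experiments

def distance_to_similarity(dist):
    # Similarity is an affine function of distance; a < 0 makes the
    # mapping monotonically decreasing, and b shifts scores upward.
    return a * dist + b

distances = [0.0, 200.0, 600.0, 1200.0]
sims = [distance_to_similarity(d) for d in distances]
print(sims)  # -> [6.0, 5.0, 3.0, 0.0]
```

Larger distances thus yield smaller similarity scores, matching the stated inverse relationship.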
**Question2**: discussions of failure cases.
**Response2**: Fig. 6 shows that ours achieves consistently best results compared with other SOTAs. We also report a failure case for the query 'There is lady is riding through the sea waves'. This is acceptable and can be attributed to (1) the low visual quality and short length of this video; (2) the confusion of the foreground object 'lady' with the background 'sea waves'. One possible reason for the confusion is that the 'lady' target is too small and may be 'swallowed' by the 'waves'. As for future work, we will add further explanations to make it easier for readers to understand our work.
**Question3**: one-to-many in training vs. one-to-one in testing.
**Response3**: We consider training with a more relaxed one-to-many relationship, which is rather straightforward, since one video may have high similarity with more than one text. As for 'one video to many texts', this widely exists in video-text retrieval datasets. As for 'one text to many videos', it is also intuitive that a textual sentence ($t_1$) may sometimes describe the contents of two similar videos ($v_1$ and $v_2$), even if the ground-truth pair is ($v_1$, $t_1$). This has been empirically identified in previous works such as [a] and [b].
The insight of our method is to fully exploit this one-to-many relationship during training for better video/text embeddings, and this contributes to the overall retrieval performance in the target domain.
[a] Bogolin, Simion-Vlad, et al. "Cross modal retrieval with querybank normalisation." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2022.
[b] Wray, Michael, Hazel Doughty, and Dima Damen. "On semantic similarity in video retrieval." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2021.
---
Rebuttal Comment 1.1:
Comment: Thank you for clarifying questions from my initial review. I am still in favour of this paper after reading the other reviews and responses and am still leaning towards acceptance of this paper.
---
Reply to Comment 1.1.1:
Comment: Thanks for your timely reply and positive feedback.
We will revise the final version according to these constructive discussions.
Thanks again, and feel free to contact us with any further questions or discussions. | Summary: The paper tackles the task of unsupervised domain adaptation for text-video retrieval. The authors propose an Uncertainty-aware Alignment Network (UAN), which exploits the semantic information of both modalities in the target domain. Specifically, in order to tackle the one-to-many relationships in the target domain, the proposed Uncertainty-aware Alignment Mechanism (UAM) tries to utilize the multi-granularity relationships between each target video and text to ensure the discriminability of target features. Finally, the authors test their proposed method on several benchmarks.
Strengths: The paper tackles an important problem and obtains good results. The results can be of high interest to the research community, since the method seems relatively easy to combine with other existing methods; however, I think some more clarifications are needed on this front.
Weaknesses: I think some parts of the paper need more explanations (see questions below). Also, for completeness, I think it would be useful to also briefly describe what happens at inference.
Missing papers in text-video retrieval related work
[1] Gorti, Satya Krishna, et al. "X-pool: Cross-modal language-video attention for text-video retrieval." Proceedings of the IEEE/CVF conference on computer vision and pattern recognition. 2022.
[2] Bogolin, Simion-Vlad, et al. "Cross modal retrieval with querybank normalisation." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2022.
[minor] typo line 155 "to to"
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. How are the results for CE reported for text-image retrieval since as far as I know the method only reports results on text-video retrieval? Do you re-train the method on images? What input features do you use for images?
2. Same question for methods in Tab 5? Do you evaluate using the published weights? Re-train? Can an already trained model be adapted to a different domain without re-training? More details are needed
3. Can you give more details on what is needed to combine the proposed method with other existing methods? What's the computational overhead? Are there any additional costs during inference? etc.
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Limitations are discussed thorough the paper and societal impact is also discussed briefly.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for reviewing our paper and for the positive feedback. We will add the missing papers in the final version.
**Question1**: details of training CE method for image-text retrieval task.
**Response1**: For a fair comparison, we use the same settings for UDA image-text retrieval as in the following paper, ACP [a]. Specifically, we use the object, scene and face features of the image to re-train the model in the CE [b] method.
[a] Y. Liu, Q. Chen, and S. Albanie. Adaptive cross-modal prototypes for cross-domain visual-language retrieval. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 14954–14964, 2021.
[b] Y. Liu, S. Albanie, A. Nagrani, and A. Zisserman. Use what you have: Video retrieval using representations from collaborative experts. arXiv preprint arXiv:1907.13487, 2019.
**Question2**: details of the combination of our method with existing methods.
**Response2**: In Tab. 5, we combine the proposed method with other methods by re-training the original methods according to their source codes. For example, the method of `HGR + UAN' is obtained by re-training HGR with the addition of our D-VLDA and UAM modules. This leads to a fair comparison.
**Question3**: details of computation overhead and inference procedure.
**Response3**: During the training phase, the calculation of pair similarities takes about 0.006s per batch, which is reasonable and acceptable given the dataset scale and batch size. In the inference stage, we directly use the model trained on the source domain and test it on the target domain, which does not involve any computational overhead.
---
Rebuttal Comment 1.1:
Comment: Thanks for the clarification. If the authors will add the clarifications in the final version of the paper, I tend towards acceptance.
---
Reply to Comment 1.1.1:
Comment: Thanks for your timely reply and positive feedback.
We will update these valuable discussions and clarifications in the final version.
Thanks again, and feel free to contact us with any further questions or discussions. | Summary: This paper proposes a method named Uncertainty-aware Alignment Network (UAN) to address the one-to-many issue, meaning that in real scenes one text usually corresponds to multiple videos and vice versa. Specifically, the proposed method achieves a new state-of-the-art in Cross-Domain Video-Text Retrieval through the incorporation of design elements such as a multi-modal mutual information module and an Uncertainty-aware Alignment Mechanism (UAM).
Strengths: 1. The motivation for addressing “one-to-many in target domain” is clearly presented and validated.
2. The ablation study and generalization experiment in this paper is comprehensive and clearly analyzed, providing strong evidence of the validity of the proposed model.
Weaknesses: 1. It is recommended to add baseline performance in Fig 4 to reflect the robustness of the proposed module.
2. The authors use each batch in the Uncertainty-Aware Alignment Mechanism to obtain self-discovered matching pairs instead of all the data, so will different batch sizes have a significant impact on the effectiveness?
3. The phrase "the confusion of the lady and sea waves" mentioned in the Qualitative Results section is ambiguous. Further analysis is recommended in this section to clarify this point.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: N/A
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for reviewing our paper and for the positive feedback.
**Question1**: add baseline in Fig. 4.
**Response1**: Thanks for your suggestion, we will add baseline performance in Fig. 4 to reflect the robustness of the proposed module in the camera-ready version of the paper.
**Question2**: ablation on the number of batch size.
**Response2**: Thanks for the advice. This is a very interesting ablation study, and we report the results as follows. As can be seen, as the batch size grows from 16 to 64 the performance tends to increase, whereas from 64 to 256 it tends to be stable. This interesting ablation has not been explored before, and we argue that this may be partially because a small batch size will exclude some ground-truth pairs, while a large batch size will lead to computational overhead rather than more included positive pairs. We will add discussion and analysis in the camera-ready version.
| **Batch** | Tf->Mt R@1 | Tf->Mt R@10 | Tf->Mt MR | Mt->Tf R@1 | Mt->Tf R@10 | Mt->Tf MR |
| :----: | :---: | :---: | :---: | :---: | :---: | :---: |
| 16 | 4.22 | 22.36 | 79 | 7.41 | 32.03 | 33 |
| 32 | 5.62 | 25.56 | 45 | 8.36 | 29.61 | 41 |
| **64** | **6.12** | **27.23** | **40** | **9.16** | **31.06** | **37** |
| 128 | 6.05 | 27.13 | 40 | 9.04 | 30.87 | 37 |
| 256 | 6.03 | 27.08 | 40 | 9.01 | 30.72 | 37 |
**Question3**: the phrase "the confusion of the lady and sea waves".
**Response3**: Fig. 6 shows that ours consistently achieves the best results compared with other SOTAs. We also report a failure case for the query 'There is a lady riding through the sea waves'. This is acceptable and can be attributed to (1) the low visual quality and short length of this video; (2) the confusion between the foreground object 'lady' and the background 'sea waves'. One possible reason for the confusion is that the 'lady' target is too small and may be 'swallowed' by the 'waves'. In future work, we will add further explanations to make it easier for readers to understand our work. | Summary: This paper proposed an unsupervised domain adaptation method tailored for video-text retrieval by exploring the one-to-many correspondences between video and text on the target domains. The proposed method is shown to be effective and superior to existing approaches, which consider only the one-to-one video-text matching relationships on the unlabelled target domains.
Strengths: + The idea to learn from the one-to-many matching relationships among video and text in real-world scenarios is intuitive, and the proposed model is straightforward.
+ The proposed method not only yielded impressive performance on video-text retrieval; the authors further demonstrate that it is beneficial to image-text retrieval. Moreover, the authors also showed that the proposed idea is generally beneficial to the task by combining it with a wide range of existing methods.
Weaknesses: + The motivations are not sufficiently discussed. The two main motivations claimed in the paper are (1) existing UDAVR approaches are mostly derived from classification-based DA methods, which is not optimal for retrieval tasks, and (2) existing methods assume a one-to-one matching relationship between video and text. For (1), it may be more straightforward to spell out the major drawbacks of adapting classification DA methods to UDAVR rather than just stating it is suboptimal. Motivation (2) should be justified in terms of whether one-to-many matching relationships are common on the adopted training datasets. It will be more intuitive if this can be quantified.
+ I'm not sure I fully understand what is the meaning of "to balance the minimization of domain shift" at L12
+ As stated at L36, the definition of UDAVR is that there is no identical textual labels on the two domains. However, two sentences even from the same domains are not likely to be exactly identical but they can describe the same events using a similar vocabulary. Therefore, does it make more sense to measure how different are two video-text retrieval datasets by the overlaps of their vocabularies?
+ In Fig. 1, I failed to find the differences between existing methods, which are claimed to be classification-based, and the proposed method designed for retrieval tasks (L55)
+ In Eq.(7)-(9), as a "negative scale factor", $a$ seems to be a constant; how about $b$? Is it also a constant? If yes, then why add a constant to a loss function to be optimised?
+ Are the symbols $S^{T\rightarrow V}$ and $S^{V\rightarrow T}$ used in Eq.(10) and Eq.(11) defined?
+ minor problems (typos etc.):
- duplicated "to" at L12, L155
- "similarity Top-K" in Eq.(12)
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: + It will be interesting to see how many discovered one-to-many matching relationships are true and consistent with the manual labels on a validation set.
+ In Fig.4, the model obtained its best performance when K=2, does this mean that each video is optimally to be matched with two text sentences? This seems to be applied to all the pairwise combination of datasets but it is not intuitive why a 1-to-2 mapping is always the best. adding more discussion and analysis might provide more insights.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 2 fair
Contribution: 2 fair
Limitations: Yes, the author discussed one of the limitations of their proposed method is the false positive inter-sample relationships discovered for training, especially at the early stage.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for reviewing our paper and for the very constructive feedback. We will revise the typos in the final version.
**Question1**: drawbacks of directly adapting classification-based DA methods in UDAVR task.
**Response1**: Previous approaches are mostly derived from classification-based domain adaptation methods, which are neither multi-modal nor suitable for the retrieval task.
Intuitively, (1) classification-based DA methods are designed for single-modality tasks (like image classification) and cannot be directly applied in a multi-modal setting (the UDAVR task); (2) their main assumption is a common label set for the source and target domains, which cannot be satisfied in the UDAVR task either. Empirically, in Tab. 4, we conduct extensive ablation studies with several state-of-the-art domain adaptation methods. The proposed D-VLDA achieves significant improvements over classification-based and conditional-alignment DA methods, which indicates that the D-VLDA module plays an essential role in the UDA video-text retrieval task.
**Question2**: the commonness of one-to-many.
**Response2**: Thanks for the constructive advice. We conduct ablations to verify this as follows. We randomly select 1,000 video-text pairs and calculate the semantic similarities between each pair of texts (with details in [a]). We then define one video and one text as a `one-to-many' pair if the corresponding similarity is above the threshold. Results can be found below. The one-to-many relationship contributes to the selected target pairs for training and the overall performance.
|**Dataset**|S=1.0 | S>0.9 |S>0.8 | S>0.7 |
| :----: | :----: | :---: | :---: | :---: |
|MSR-VTT|1,000|1,232|1,398|1,478|
|MSVD|1,000|1,268|1,426|1,520|
|TGIF|1,000|1,198|1,306|1,406|
[a] Wray, Michael, Hazel Doughty, and Dima Damen. "On semantic similarity in video retrieval." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2021.
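The counting procedure described above can be sketched minimally. This is an illustrative assumption, not the authors' exact protocol from [a]: the hypothetical helper `count_one_to_many` takes pre-computed text embeddings, treats every directed text pair above a strict threshold as an extra "one-to-many" pair, and adds those to the N ground-truth pairs (matching the table's "S=1.0 gives 1,000" baseline).

```python
import numpy as np

def count_one_to_many(text_embs, threshold):
    """Illustrative 'one-to-many' count: start from the N ground-truth
    video-text pairs, then add every directed text pair (i, j), i != j,
    whose cosine similarity exceeds the threshold."""
    # Normalize rows so that dot products are cosine similarities.
    embs = text_embs / np.linalg.norm(text_embs, axis=1, keepdims=True)
    sims = embs @ embs.T
    n = len(embs)
    extra = sum(
        1
        for i in range(n)
        for j in range(n)
        if i != j and sims[i, j] > threshold
    )
    return n + extra
```

Lowering the threshold only adds pairs, so the counts grow monotonically as in the table (e.g. 1,000 at S=1.0 up to 1,478 at S>0.7 for MSR-VTT).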
**Question3**: the meaning of "to balance the minimization of domain shift".
**Response3**: Different datasets usually have inconsistent data distributions and representations, thus leading to the domain shift problem.
To alleviate this, we propose the D-VLDA module to balance the minimization of domain shift across the source and target domains.
By `balance' we refer to the dynamically minimized domain gap designed for multi-modal setting.
**Question4**: measure the difference of two video-text retrieval datasets by the overlaps of their vocabularies.
**Response4**: Thanks for the advice. We admit that in some video-text retrieval datasets, one video may have high similarity with more than one text. For instance, in MSR-VTT, each video corresponds to 20 texts. Besides, the video may also have relatively high similarity with other texts (which belong to other videos). However, we argue that in the UDAVR task, the domain gap mainly denotes large differences in video sources (cartoons, movies, sports, etc.), text sources (short sentences, movie dialogues, cooking instructions, etc.), video/text lengths, and so on. This defines the uniqueness of the UDAVR task.
**Question5**: illustration of Fig. 1.
**Response5**: Thanks for your suggestion; we will further optimize Fig. 1 in the camera-ready version.
In Fig. 1, existing works are mostly derived from classification-based domain adaptation methods to align different domains.
In contrast, we propose Distribution-based Vision-Language Domain Adaptation (D-VLDA) to reduce the divergence of domain statistics, so that distribution shifts are significantly diminished, improving the generalization of the learned model on the out-of-distribution target domain.
**Question6**: details of Eq. 7-8.
**Response6**: In Eq. 7-8 of D-VLDA, a is a negative scale factor since similarity is inversely proportional to the distance, and b is a shift value. For all our experiments, we set a to -0.005 and b to 6. We will add a discussion of parameters a and b in the Experimental Details section.
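As a hedged illustration of the role of a and b described above (the exact form of Eq. 7-8 is in the paper; this sketch only assumes a linear map with the stated values a = -0.005 and b = 6):

```python
def distance_to_similarity(distance, a=-0.005, b=6.0):
    """Map a non-negative distance to a similarity-style score.

    a < 0 makes the score decrease as distance grows (similarity is
    inversely proportional to distance); b shifts the output range.
    """
    return a * distance + b
```

With these values a distance of 0 maps to 6 and larger distances map to smaller scores, so the loss can treat the output like a similarity logit; the constant b only shifts the range and does not change the gradient with respect to the distance.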
**Question7**: why a 1-to-2 mapping is always the best.
**Response7**: By a '1-to-2' relationship we denote that during training, one video can be considered `paired' with more than one text (beyond the ground-truth one) and vice versa. We claim that training with one-to-many benefits the target retrieval performance even when evaluating with the one-to-one setting. We will further clarify this in the final version.
---
Rebuttal Comment 1.1:
Comment: Thanks again for your constructive reviews and we are looking forward to further discussions.
As to the "one-to-many" issue, which is also mentioned by Reviewer NzVt, we add the following clarifications as a further response.
**Q**: how many ground-truth pairs are among the newly found pairs by top-2 UAM
**A**: We show the results for the TGIF-to-MSRVTT setting in Table 1.
For instance, 3/18 (16.66\%) denotes that 18 pairs are selected, of which 3 are ground-truth pairs, giving a ratio of 16.66\%. Clearly, UAM selects 6 more pairs, of which 2 are ground-truth ones.
Besides, as training proceeds, the ratio of ground-truth pairs with UAM also increases and is consistently larger than that of DAC.
This is reasonable and consistent with our intuition that more selected pairs (including 2/6 ground-truth pairs and 4/6 pseudo-aligned pairs with high similarities) benefit the retrieval performance in the target domain.
|**Epoch**|20 | 40 |60 | 80 | 100 |
| :----: | :----: | :---: | :---: | :---: | :---: |
|UAN(w/ DAC)|3/18(16.66\%)|8/23(34.78\%)|10/24(41.66\%)|11/28(39.28\%)|13/31(41.93\%)|
|UAN(w/ UAM)|5/24(20.83\%)|12/32(37.5\%)|15/34(44.11\%)|17/37(45.94\%)|21/34(48.33\%)|
|Newly selected|2/6(33.33\%)|4/9(44.44\%)|5/10(50\%)|6/11(54.54\%)|8/13(61.53\%)|
Thanks again and feel free to inform us for any further questions and discussions.
---
Rebuttal Comment 1.2:
Comment: I thank the authors for their efforts in the rebuttal. I have checked their responses as well as the comments from other reviewers carefully. Some of my concerns have been addressed. However, I still believe that the key assumption held by this method should be further verified to make the paper stronger. The authors provided additional statistics to show that around 20% more video-text pairs (with a threshold of 0.9) are matched in semantics but overlooked in the manual annotations. The key question to me is how this affects existing UDAVR solutions, which is under-explored. For example, if we dropped all the videos/texts involving one-to-many matching relationships from the training data, how would this affect the proposed method or existing models that are claimed to suffer from the assumption of one-to-one matching? Such an investigation can be helpful in explaining why the proposed ideas are effective. I also agree with Reviewer#NzVt that additional examples of the mined video-text pairs which were initially missing from the manual labels would help with understanding, as well as discussion of whether such one-to-many relationships should be considered in the source domains or at test time.
In summary, although the proposed method is intuitive and effective to me as commented pre-rebuttal, I think the most critical part about the assumption still needs to be improved. So I would keep my initial rating. | Rebuttal 1:
Rebuttal: We would like to thank the reviewers for their efforts, detailed reviews and interest for our submission. We will integrate all their remarks in the revised version of the paper. | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: This paper addresses the unsupervised domain adaptation video-text retrieval problem by two proposed components, the D-VLDA (Distribution-based Vision-Language Domain Adaptation) and the UAM (Uncertainty-Aware Alignment). The proposed D-VLDA aims at alleviating the domain discrepancy via moment-based method, while the UAM method aims for better pseudo labeling.
Strengths: (+) This article is well written, and the description of the method pipeline is clear and easy to understand.
(+) Experiments are sufficient and ablation studies show the effectiveness of the proposed components.
(+) State-of-the-art results are demonstrated on several datasets.
Weaknesses: (-) Unfair comparison in Table 3, see Question (2).
(-) The "one-to-many" assumption that underpins the article lacks empirical evidence supporting its widespread occurrence.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: (1) The term "(moment-based) D-VLDA module" appears to be referred to as "multi-modal mutual information module" in the abstract. Could the author please clarify the relationship between these two concepts?
(2) In the design of Uncertainty-Aware Alignment (UAM), the authors expand the top-1 dual alignment method from [A] to a top-k dual alignment. It appears that Table 3 aims to validate this expansion, where "UAN(w/DAC)" denotes the application of the top-1 dual alignment consistency mechanism to pick the most similar videos and texts in the target domain, as described in [A]. However, I note that the results for "UAN(w/DAC)" match those reported in [A], while [A] utilizes a different discrepancy loss than D-VLDA. This may lead to an unfair comparison. I would suggest the authors control the variables to ensure a more equitable comparison.
(3) The authors propose the uncertainty-aware alignment network based on the "one-to-many" assumption: one text can describe multiple videos and vice versa. This forms the fundamental premise of the paper. However, the frequency of the "one-to-many" phenomenon remains unclarified. I'm concerned that the efficacy of the method may not be primarily due to the "one-to-many" assumption, but potentially other factors, e.g. the top-K methods introduce more true positive pairs in the early training stage, providing the model with more supervision about the target domain. I think the authors should verify the frequent existence of the "one-to-many". For example, collect the top-K dual alignment results of the final model, remove the pairs that belong to the ground-truth pairs, and then report the proportion of the remaining pairs relative to all ground-truth pairs, and also estimate how many of those remaining pairs match correctly (i.e., belong to the "one-to-many").
[A] X. Hao, W. Zhang, D. Wu, F. Zhu, and B. Li. Dual alignment unsupervised domain adaptation for video-text retrieval. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, 2023.
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 4 excellent
Limitations: No
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for reviewing our paper and for the positive feedback.
**Question1**: unfair comparison in Tab. 3.
**Response1**: Thank you for pointing this out; the corrected experimental data is as follows:
| **Method** | Tf->Mt | | | Mt->Tf | | |
| :----: | :----: | :---: | :---: | :---: | :---: | :---: |
| | R@1 | R@10 | MR | R@1 | R@10 | MR |
| UAN(w/ DAC) | 5.43 | 25.63 | 48 | 8.44 | 29.12 | 42 |
| UAN(w/ UAM) | 5.76 | 26.12 | 43 | 8.73 | 29.84 | 40 |
| **UAN(full)** | **6.12** | **27.23** | **40** | **9.16** | **31.06** | **37** |
**Question2**: "(moment-based) D-VLDA module" vs. "multi-modal mutual information module".
**Response2**: Thanks for the advice. By 'multi-modal' we refer to the contribution of our proposed domain alignment module, which is the first in UDAVR to tackle the domain gap with the consideration of different modalities, while 'moment-based D-VLDA' denotes the specific technique we adopted (minimizing the domain gap in a distributional manner). We will align these terms for more clarity in the final version.
**Question3**: the frequent existence of the "one-to-many".
**Response3**: Thanks for the constructive advice. We conduct ablations to verify this as follows. We randomly select 1,000 video-text pairs and calculate the semantic similarities between each pair of texts (with details in [a]). We then define one video and one text as a `one-to-many' pair if the corresponding similarity is above the threshold. Results can be found below. The one-to-many relationship contributes to the selected target pairs for training and the overall performance.
|**Dataset**|S=1.0 | S>0.9 |S>0.8 | S>0.7 |
| :----: | :----: | :---: | :---: | :---: |
|MSR-VTT|1,000|1,232|1,398|1,478|
|MSVD|1,000|1,268|1,426|1,520|
|TGIF|1,000|1,198|1,306|1,406|
[a] Wray, Michael, Hazel Doughty, and Dima Damen. "On semantic similarity in video retrieval." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2021.
---
Rebuttal 2:
Comment: I thank the authors for their careful reply. My first two questions have been addressed. I am still concerned about the basic assumption, the frequent existence of "one-to-many", as mentioned in my question (3). I can see the same concern from Reviewer muWC.
In my understanding, the Uncertainty-Aware Alignment Mechanism module can be seen as a pseudo-label generator that is used to find matching video-text pairs in the target domain. Even if the "one-to-many" assumption does not hold, it seems that the "top-2 UAM" algorithm helps to find more true-positive pairs (albeit with a potential for increased noise), thus allowing the model to learn more about the target domain, while the basic "top-1 UAM" algorithm results in many true-positive pairs being filtered out.
(1) In the authors' reply, the authors calculate the similarities among 1,000 texts and pick more than 200 new pairs whose similarities are larger than 0.9. Can the authors show some of the pairs they found? By the way, this is different from how UAM finds multiple text pairs belonging to the same video, so I am still confused about how many new pairs will be found by the UAM. The top-2 UAM algorithm will obviously find more pairs than top-1 UAM, so among the newly found pairs, how many of them belong to the ground-truth pairs?
(2) If "one-to-many" exists frequently in the datasets, does that mean the UAM should also be applied in the source domain?
The Uncertainty-Aware Alignment Mechanism module does show effectiveness. My concern relates to the author's explanation about why it is effective.
---
Rebuttal Comment 2.1:
Comment: Thanks for your timely reply and constructive feedback.
We clarify your concerns as follows.
**Q1**: how many ground-truth pairs are among the newly found pairs by top-2 UAM
**A1**: We show the results for the TGIF-to-MSRVTT setting in Table 1.
For instance, 3/18 (16.66\%) denotes that 18 pairs are selected, of which 3 are ground-truth pairs, giving a ratio of 16.66\%. Clearly, UAM selects 6 more pairs, of which 2 are ground-truth ones.
Besides, as training proceeds, the ratio of ground-truth pairs with UAM also increases and is consistently larger than that of DAC.
This is reasonable and consistent with our intuition that more selected pairs (including 2/6 ground-truth pairs and 4/6 pseudo-aligned pairs with high similarities) benefit the retrieval performance in the target domain.
|**Epoch**|20 | 40 |60 | 80 | 100 |
| :----: | :----: | :---: | :---: | :---: | :---: |
|UAN(w/ DAC)|3/18(16.66\%)|8/23(34.78\%)|10/24(41.66\%)|11/28(39.28\%)|13/31(41.93\%)|
|UAN(w/ UAM)|5/24(20.83\%)|12/32(37.5\%)|15/34(44.11\%)|17/37(45.94\%)|21/34(48.33\%)|
|Newly selected|2/6(33.33\%)|4/9(44.44\%)|5/10(50\%)|6/11(54.54\%)|8/13(61.53\%)|
**Q2**: why not apply "one-to-many" in the source domain
**A2**: This is a very constructive suggestion.
Indeed, as stated in this paper, "one-to-many" widely exists in video-text retrieval datasets, and it can (or should) also be tackled in the source domain.
However, we intuitively argue (and have empirically verified) that solving it in the source domain does not contribute to the performance gain in the target domain, even if it may benefit source-domain performance.
For instance, simply applying top-2 UAM in the source domain (TGIF dataset) yields an R@1 gain from 8.1 to 9.5, while merely leading to a slight increase on the target domain (MSRVTT dataset) from 6.12 to 6.14.
This is also somewhat acceptable and reasonable due to the large domain gap and modality complexity.
We will update the full experimental results in the final version.
Thanks again and feel free to inform us for any further questions and discussions. | null | null | null | null | null | null |
Agents Explore the Environment Beyond Good Actions to Improve Their Model for Better Decisions | Reject | Summary: Model-based planning agents such as MuZero leverage a learned model of the environment dynamics to learn a policy to follow in the actual environment. By planning with the learned dynamics model, the agent may generate a stronger policy than when limited to only experience from the real environment. The drawback to this approach lies in the agent being forced into decision states it has not seen before. The agent's model will have incorrect value predictions for these states, and the planning stage will result in weaker or sub-optimal policies in these states. This paper proposes an exploration scheme that ensures the agent explores sub-optimal regions of the search space to build better value estimates of the states within.
Strengths: **Originality:** The ideas introduced in this paper have not been published before as far as I can tell.
**Significance:** The algorithm proposed by the authors is nominally significant. It is a naive implementation of the explore-first-then-exploit type of algorithms.
**Clarity:** The writing itself of the paper was fair but could use considerable improvement.
**Quality:** It is hard to assess the strengths in the quality of this paper.
Weaknesses: First off, I think it needs to be said that this is not a paper that can be improved upon with changes in the writing or presentation. The algorithm introduced is only a minor change to an already existing algorithm and it provides very little new insight.
That being said, I do want to encourage the authors to continue learning and working in the field. For the authors, I would recommend attending a workshop or course on technical writing and building a stronger background in RL and planning.
For the submission specifically, I have a number of critiques. However, I will touch on only a few because as I've said this is not a paper that I think can be reworked and resubmitted:
1) The introduction of the paper goes well into page 3. A strong problem statement and motivation section can go along way in emphasizing the importance of the work but the paper needs to focus more on the actual contributions.
2) A similar note on the background and related works section. A lot of it deals with the history of the body of work and not its relation to this particular paper. Furthermore, the background section does not provide the actual background needed for the paper. A good background section typically formalizes the problem statement; clarifies any assumptions being made; defines notation and how the problem is being modelled; and provides a primer on the literature specifically needed to understand the rest of the paper. This submission fails in those regards.
3) It is difficult to understand the algorithm introduced because nothing has been defined.
4) The experiments are limited to a single small domain and do not give any indication of how the approach scales to larger ones.
5) The experiments do not provide comparisons to other baselines, that is, to other algorithms that take the explore-first approach, which are, in my opinion, direct competitors to the proposed algorithm.
Technical Quality: 1 poor
Clarity: 1 poor
Questions for Authors: 1) In equation 4, should this be the definition of $P_{normal}$ or are you defining $P_{exploring}$ twice?
2) On line 40, you initially restrict $T> 1$ yet $T$ is set to values outside of this range. What is going on?
3) Re Lines 45 - 49: Do agents not already make decisions about next actions?
4) What is Gumbel MuZero's policy or how is it learned?
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 1 poor
Presentation: 1 poor
Contribution: 1 poor
Limitations: The authors provide a list of limitations of their work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 1: Very Strong Reject: For instance, a paper with trivial results or unaddressed ethical considerations
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your review.
1. Equation (4) is derived from equation (3) after concretising the policy $P_{normal}$ to be the improved policy of Gumbel MuZero. The improved policy of Gumbel MuZero is described in detail in the Gumbel MuZero paper (see the answer to question 4).
2. In line 40 we refer to training and the exploration policy $P_{exploring}$ (see Figure 4). In other contexts, there can be more eager uses of the policy, and temperatures may therefore be chosen from the range $(0,1]$:
* In the original MuZero paper during training.
* During an eager playout $T\rightarrow 0$ is an option.
3. Of course, all RL agents make decisions about actions. The idea here is to have a separation of concerns and an architectural layering. "Planning" in the sense of having a model (as used in the MuZero paper) and simply asking for the best next action and an improved policy (as a probability distribution over the action space) that produces a better potential reward is a well-defined concern and a task in itself - the Gumbel MuZero paper elaborates on this task. Using the result of this planning could be as trivial as simply executing the proposed action, or it could be more complicated, like following a policy that takes the agent to an interesting place - perhaps one where it detects a lack of knowledge about the environment - and then trying its best to learn from that situation how its actions are rewarded from there.
4. Gumbel MuZero is described in detail in the [cited paper](https://openreview.net/pdf?id=bERaNdoegnO), which also provides an [example implementation](https://github.com/deepmind/mctx). "Gumbel AlphaZero and Gumbel MuZero, without and with model learning, respectively, are state of the art on Go, Chess, and Atari, and significantly improve prior performance in planning with few simulations".
In the Gumbel MuZero paper, equation 11 gives the improved strategy.
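The temperature use described in point 2 above can be illustrated with a plain probability vector. This is a generic temperature rescaling sketch, not the Gumbel MuZero improved policy itself (which is given by equation 11 of that paper); the function name and the assumption that `policy` is an ordinary probability vector are ours:

```python
import numpy as np

def temperature_scale(policy, T):
    """Rescale a probability distribution with temperature T.

    T > 1 flattens the distribution (more exploration, as in P_exploring);
    T -> 0 concentrates mass on the argmax (eager play)."""
    logits = np.log(np.asarray(policy, dtype=float)) / T
    logits -= logits.max()          # subtract max for numerical stability
    p = np.exp(logits)
    return p / p.sum()
```

For example, with T = 2 the distribution [0.7, 0.2, 0.1] flattens toward uniform while preserving the ordering of the actions, which is the behaviour the exploration policy relies on during training.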
---
Rebuttal Comment 1.1:
Title: Thank you for your rebuttal
Comment: I thank the authors for their rebuttal and answering the few questions I had. However, as they do not address the main issues that cause the paper to suffer, my opinion remains the same and I will not be changing my score. | Summary: The paper presents a method to improve the exploration of the MuZero agent in games. The authors propose a hybrid policy that mixes an exploratory policy and the optimized policy. The exploratory policy is meant to reduce the brittleness of optimal policies.
The new method is demonstrated on Tic-Tac-Toe.
Strengths: The paper addresses maybe the most important issue in decision making. The authors' approach to encouraging exploration in the decision tree addresses an important problem that could have significant implications if successful. Adversarial examples in games and self-play is certainly an interesting domain to investigate this.
The motivation is good, and the authors also evaluate their approach on a small game. The idea is simple and makes sense. The appendix is also nice, and the source code is included. Prior work is also well researched.
I like that the authors analyzed an example gameplay in Section 6.2
Weaknesses: Section 2 Recent Historical Background
- For relevant work on adversarial policies, I like that you included "Adversarial policies beat professional-level Go AIs. arXiv preprint arXiv:2211.00241, 2022.". Another paper that investigates this issue and I think should also be included is "Timbers, Finbarr, et al. "Approximate exploitability: learning a best response." Proceedings of the International Joint Conference on Artificial Intelligence (IJCAI). 2022."
- When it comes down to AlphaZero extensions, a good addition in this section would be "Player of games." arXiv preprint arXiv:2112.03178 (2021).”, as it extends AlphaZero to imperfect information games
The paper sometimes does not read that well, and individual sections sometimes feel a bit disconnected (i.e. little connection to other sections in terms of semantics and flow). E.g. the word curiosity is only really used at the beginning, and my understanding is that the authors really just mean exploration (and indeed exploration is used in the rest of the paper). Also note that "curiosity" clashes with prior work on intrinsic motivation (Oudeyer et al., 2007; Schmidhuber, 1991, ...).
The idea is very simple: adding exploration at the beginning, which AlphaZero already does using different methods. I am not sure if the main evaluation in Section 7, namely Figure 5, is entirely fair (I might be wrong). The figure compares exploration-on vs exploration-off, but I don't think that exploration-off in this case collapses to the "baseline" case (e.g. MuZero or AlphaZero, as both also force exploration at the beginning).
This issue connects to the second one, which is that the experiments are not that well explained. It is fine that the authors moved many details to the appendix (which contains a lot of interesting information), but the main body of the paper still should include enough for people to understand what is presented.
But my biggest issue is that the main challenge in balancing exploration in learning in games arises when the approximator can't really memorize the full game. If it could, then any exploration just works, as we get to see all the states and memorize them (i.e. no generalization is needed). The game presented here is too small to see the effect; I suggest the authors run it on e.g. Connect Four. I believe the current experiments are simply not sufficient.
Finally, I think the measure of whether the exploration helps or not should ultimately be exploitability rather than just a uniform measure of bad decisions over all the game states.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: Can you elaborate why you chose MuZero rather than AlphaZero to investigate the exploration issue? What’s the point of learning the model in your case?
Can you also measure exploitability for exploration on/off? I believe that is the ultimate metric in your case, and I also believe the suggested exploration would indeed help there.
How would the proposed method perform in more complex environments where the representational power of the estimator is limited with respect to the complexity of the full environment?
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 2 fair
Contribution: 2 fair
Limitations: Addressed
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your review. Your feedback is very helpful for us to improve.
Our main task is to create a MuZero implementation – that is set for us. We are aware that in moving from AlphaZero to MuZero, the model needs to additionally learn to move forward in time, allowing the agent to operate in environments without a perfect simulator. If we had to choose the simplest solution to a given problem, given a perfect simulator, we would choose AlphaZero over MuZero. We are also aware that AlphaZero and MuZero play to their strengths in environments with a huge number of possible states where "the approximator can't really memorise the full game". In search of a simple and fast low-level integration test, we - somewhat naively - chose the game Tic-Tac-Toe. Our expectation as a test result for Tic-Tac-Toe is that there is not a single wrong decision in the whole decision tree. We believe that by choosing Tic-Tac-Toe we have chosen a corner case where MuZero has limitations, which we believe should be overcome in such a way that our Tic-Tac-Toe integration test passes while all of MuZero's previous achievements in terms of effectiveness and - if possible - efficiency are retained. Although we have not achieved this goal here, we believe that we have taken a first step in this direction.
Thank you for introducing us to the two papers on exploitability. We started to dive into them, added them to the history section of the paper, implemented a perfect exploiter for our trivial Tic-Tac-Toe test, ran the generation of a Figure 5 type evaluation, and look forward to using a "best response approximator" for cases like Go5x5 (our results there look promising) or Connect Four. Aiming for exploitability as low as possible in multiplayer games is now a must for us. For the mentioned exploits of Go agents, it looks like the ideal metric.
In our exploitability metric, which we have just implemented, we always start from the beginning of the game. We realised - maybe we need to gain a deeper understanding of the two papers - that this has the limitation that the exploited agent could hide a weakness by favouring a subset of otherwise equal best choices (e.g. instead of placing equal probability on all possible first moves in Tic-Tac-Toe, the exploited agent X could always choose a particular first move and thereby hide a weakness following one of the never-chosen first moves). This limitation could be seen as a weakness if one also aims for stability against small environmental changes - nothing that we have to face in board games. Zero exploitability is therefore a strong goal but not the best that we could aim for. For the tiny game of Tic-Tac-Toe we would like to aim for the best. One remark: adding Dirichlet noise in our implementation at planning entry adds a force to equalise the probability between otherwise equal options - this at least helps to avoid running into the aforementioned weakness of the exploitability metric.
Concerning your last question, we can offer some heuristics. Let us start with the configuration $T=1$ in equation (3). In this case, we are using the usual improved policy from planning to act in the environment and to train the model, but the value targets (equation 5) are only perfect for $t\geq t_{startNormal}$ and otherwise such that the value loss is essentially zero. We speculate that this simply makes convergence slower, as policy learning stays the same and the loss force on the value expectation is partially the same as usual and otherwise zero. When increasing $T$ in equation (3), the character of the policy training and the suboptimal value training stays the same, but we get additional exploration of the environment in the vicinity of the policy. So the possible additional forces should only come from the new experiences made in the additional exploration of the environment. | Summary: Inspired by the failure of KataGo against an amateur-level agent, the author proposes to use an additional randomization scheme to encourage the agent to explore the less experienced parts of the decision tree. Such a randomization scheme allows the agent to randomly deviate from the planned policy, and then switch back to learn the correct value function. Empirically, they evaluate their method on Tic-Tac-Toe and justify the effectiveness of their method.
Strengths: 1. The problem of exploring the less experienced part of the decision tree is interesting.
2. The paper provides detailed related work.
Weaknesses: The main concern with this paper is that the contribution is marginal.
The proposed method is a minor modification (a randomization scheme) to the existing method. This is not to say the investigated problem itself is not important. But with such a minor modification, the paper needs to provide sufficient evidence to prove its effectiveness, either by theoretical analysis or a large body of experiments. However, I am afraid the paper provides neither. For the evaluation, the paper only evaluates the proposed method in a very simple scenario, Tic-Tac-Toe, which is way too easy. More experiments on more complicated scenarios are definitely needed.
The writing is unprofessional and should be improved substantially. For example, references should not be included in the abstract.
Technical Quality: 1 poor
Clarity: 2 fair
Questions for Authors: n/a
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 1 poor
Presentation: 2 fair
Contribution: 1 poor
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your review. | Summary: To improve prediction accuracy, this paper divides the training phase's policy into exploration and normal policies. The proposed algorithm is tested in the game of Tic-Tac-Toe, and different noise strategies are introduced for experimentation.
Strengths: The paper introduces a novel method for increasing policy exploration and experiments with different noise strategies for exploration.
Weaknesses: 1. The experiments lack persuasiveness as there is no comparison or theoretical analysis with advanced exploration algorithms.
2. The discrepancy between the mentioned "poor" predictions and the proposed exploration method is not fully addressed.
3. The setting of random time $t_{startNormal}$ is not clear.
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: See the weakness part.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 2 fair
Limitations: The application of the proposed algorithm in more complex tasks and its effectiveness compared to algorithms with other exploration techniques remains to be studied.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your review.
1. We use a proven, very strong algorithm with well-founded and tested exploration strategies, namely Gumbel MuZero, as the baseline. During the implementation, we were looking for an integration test - and by choosing Tic-Tac-Toe as such a test, we believe that we have taken the very strong Gumbel MuZero algorithm into a corner case where it cannot play to its strengths and instead exhibits a weakness. The MuZero paper showed that the algorithm mastered three board games with a large number of states and that it mastered classic Atari games as an example of cases where the environment doesn't come with a perfect simulator.
We believe that the strong algorithm of Gumbel MuZero should be adapted in such a way that it retains all its achievements, but does not produce a single wrong decision on the full decision tree of the trivial game Tic-Tac-Toe. We believe that we have made a step in this direction with this paper, but we have clearly not reached this goal yet. The algorithm we use inherits from Gumbel MuZero in that the policy targets are set exactly the same way and the value targets are set the same way for $t\geq t_{startNormal}$ and otherwise such that the value loss is zero. It is a bit like starting the usual Gumbel MuZero environment playouts from different points on the decision tree, but getting there from the start of the game and on the way to the start points still benefiting from the policy improvement by planning, but doing nothing about the value because the actions there are off-policy.
2. We are not sure if we have understood your second question correctly.
With Gumbel MuZero we have a search algorithm that - with mathematical proof in the Gumbel MuZero paper - guarantees a potential policy improvement for any number of in-mind planning steps in the context of a given model. The weakness, however, is that if the model is not able to represent the reward situation in the real environment sufficiently accurately, then even an improvement in policy relative to the imperfect model may result in a worse policy relative to the real environment. An illustration of a particular case is given in Figure 6.
3. Before playing a game, the time $t_{startNormal}$ is chosen as a random number from the interval $[0, t_{end})$, where $t_{end}$ is taken as the maxGameLength from the games in the game buffer. | null | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
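For illustration, the sampling of $t_{startNormal}$ described above can be sketched as follows (a minimal sketch with our own names; `game_buffer` is assumed to be a list of finished games):

```python
import random

def sample_t_start_normal(game_buffer):
    """Draw t_startNormal uniformly at random from [0, t_end),
    where t_end is the maximum game length in the game buffer."""
    t_end = max(len(game) for game in game_buffer)  # maxGameLength
    return random.randrange(t_end)  # integer in [0, t_end)
```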
Learning To Dive In Branch And Bound | Accept (poster) | Summary: The authors propose L2Dive to learn dataset-specific diving heuristics with graph neural networks. Specifically, they employ generative models to predict variable assignments and leverage the duality of linear programs to make diving decisions based on the model’s predictions. Experiments show improved performance on both diving and branch-and-bound tasks.
Strengths: 1. Interesting research topic. The diving heuristic can be well formulated as a 0/1 prediction task. Thus, employing generative models for this task is meaningful and practical.
2. Novel algorithm design. The leverage of the duality property for variable selection is insightful, as it is not obvious to design such approach compared to the design of generative models.
3. Well written paper. The presentation is really clear, which makes the readers easy to follow.
Weaknesses: 1. Inefficient literature research. The main idea used in this topic is similar to that in [1] in the node selection task. Personally, I think the diving task and the node selection task on binary problems are really similar, and the GNN model proposed in [1] can be used in this task directly without much adaptations. Thus, the introduction and comparison to [1] is required.
2. (I am not certain about these results from memory, so if I am wrong, please inform me of the correct results.) Confusing results in Table 1. From my recollection, after presolve, SCIP can find optimal solutions easily at the root nodes on most instances from the four benchmarks the authors used. That is the reason why these four benchmarks are usually used for the variable selection task (dual task). Thus, I am not sure whether the results in Table 1 are convincing enough, as the benchmarks used here may be too easy for this task. (I am not sure about this comment.)
3. Insufficient ablation study. The comparison between dual-theory-based variable selection and vanilla variable selection is missing. Thus, I am not sure whether the proposed dual-based selection strategy is effective. The relationship between the complexity of the generative models and the final performance is also missing. The precision increases with more complex models, while the inference time also increases. Thus, there may be a trade-off for model design. I searched for the keyword "ablation" in both the main paper and the appendix but did not find any results. For better completeness, it would be better to include them in the *main paper*.
4. Insufficient comparative experiments. An evaluation of the generalization ability of this approach is missing. Thus, what is the performance of L2Dive on larger instances? Moreover, I observe that the results on the branch-and-bound tasks are relatively limited. Thus, I also have doubts about the overall performance on the B&B task, as the generalization problem in this setting is usually more severe.
[1] Khalil, Elias B., Christopher Morris, and Andrea Lodi. "Mip-gnn: A data-driven framework for guiding combinatorial solvers." Proceedings of the AAAI Conference on Artificial Intelligence. Vol. 36. No. 9. 2022.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: 1. The authors describe in line 257 that they "validated and tested on 500 instances". I am not sure whether they validate and test the model in the same dataset, which is discouraged in practice.
2. What is the standard deviation of the results in Table 1&2? As the solving time for different instances varies a lot, can you provide the geometric mean of these results as that in [2,3]?
3. I cannot find any code availability claims in either the paper or the appendix. I think this is discouraged for an AI conference like NeurIPS. Can the code for this paper be made available once the paper is accepted?
I would like to raise my score if the authors reply well (especially to question 3) to the above weaknesses and questions.
[2] Gupta, Prateek, et al. "Hybrid models for learning to branch." Advances in neural information processing systems 33 (2020): 18087-18097.
[3] Achterberg, Tobias. Constraint integer programming. Diss. 2007.
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations: See weakness above.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: # Thank you. We made revisions based on your feedback.
**Thank you very much for your thoughtful review. We have made revisions to our paper based on your feedback and address your concerns in more detail below.**
## We performed additional ablations to isolate the effects of dual reasoning.
>*“Insufficient ablation study. The comparison bettween dual theory based variable selection and vanilla variable selection is missing.*
Thank you for this suggestion! We fully agree that this ablation is a good idea; it was also proposed by Reviewer gbBh. We have carried out an ablation study on capacitated facility location, where we evaluated:
- L2Dive with $s_j = q_\theta(\hat{x}_j)$, i.e. only using model confidence, not using dual reasoning.
- L2Dive with $s_j = \text{Uniform}(0, 1)$, i.e. a random variable selection, not using model confidence, not using dual reasoning.
The results are shown in the attachment. Even with a random scoring rule, L2Dive outperforms the best heuristics. However, the results significantly improve when using model confidence, and they slightly improve when additionally using dual reasoning. Since using dual reasoning comes at negligible additional costs, because the dual values are readily available in the solver, we choose to use dual reasoning. We have added these results to the main text along with a short discussion.
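To make the compared variants concrete, here is a minimal sketch of the three scoring rules (function names are our own; the combination used for full L2Dive is only an additive placeholder, as the exact rule is given by eq. (8) in the paper):

```python
import random

def score_confidence_only(q_theta_j):
    # ablation: s_j = q_theta(x_hat_j), model confidence without dual reasoning
    return q_theta_j

def score_random():
    # ablation: s_j ~ Uniform(0, 1), ignoring the model entirely
    return random.uniform(0.0, 1.0)

def score_full(q_theta_j, dual_term_j):
    # full L2Dive: confidence combined with a dual-reasoning term;
    # the additive form here is purely illustrative, not eq. (8) itself
    return q_theta_j + dual_term_j
```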
## Khalil et al. is relevant and related, but different.
>*“main idea [...] is similar to that in [1] in the node selection task [...] introduction and comparison is required.”*
Thank you for bringing Khalil et al. [1] to our attention. This is a cool paper! Like us, they leverage supervised learning on solutions of MILPs to train a GNN for predicting variable assignments (although both our model architecture and objective differ from theirs). But most importantly, as you note, they propose a heuristic rule to perform node selection from the model prediction, while we consider diving. One of our key contributions is to propose a method for using a generative model for diving (section 3.2). Our approach offers several benefits, e.g., (a) depth-first search for node selection is known to be problematic, because node selection needs to balance the primal and dual objective, while diving is solely concerned with the primal objective (Achterberg, page 73), (b) diving can leverage the fast LP solver rather than fully (and time-consumingly) resolving each node in the search tree, (c) since diving is conducted in probing mode, a "bad" dive leading to many open nodes can simply be aborted without impacting the solver state, while bad node selection decisions must be undone via a solver restart.
We think this is a valuable discussion. Therefore, we have added a reference to Khalil et al [1] in the main text, and clearly highlight similarities and differences.
## Problem instances are large-scale and experimental protocol is thorough.
>*“results on the branch-and bound tasks are relatively limited. [...] performance of L2Dive on larger instances? ”*
We politely disagree. The instances we consider in our experiments in Section 5.2 are large-scale. For example, the load balancing instances contain 61,000 variables and 64,410 constraints per instance prior to pre-solve and more than 4,000 variables and 4,000 constraints after pre-solve. Similarly, the neural network verification instances are of the same order of magnitude.
This is comparable to or larger than the "hard" instances considered by Gasse et al. (2019). In addition, our instances are harder to solve. Not a single load balancing instance can be solved with the SCIP solver within 60 minutes, while the results reported in Gasse et al. suggest that their instances can be solved on average in 2 minutes (set cover) to 35 minutes (independent set).
Moreover, our experiments on branch and bound are extensive and thorough. Notably, we compare against (and outperform) a strong "tuned" solver baseline, something that is arguably important to benchmark machine learning methods for branch and bound against, but rarely done (with some notable exceptions, e.g., Nair et al. (2020), Sonnerat et al. (2021), Chmiela et al. (2021)). We establish a thorough protocol (Appendix E) for evaluation where our baseline receives the same budget of solver calls for tuning as L2Dive expended for data collection.
Overall, we consider a total of six datasets.
## Experiments in 5.1 are for comparing diving heuristics
>*“benchmarks [used in 5.1] too easy for this task?”*
Thank you for raising this point. Our goal for the experiments in Section 5.1 was to study the diving performance of L2Dive and compare it to existing diving heuristics. While it is true that these instances can be solved relatively quickly with a full branch and bound solver, our results suggest that they are challenging for diving heuristics (larger primal gaps), and thus these instances are appropriate and provide a useful testbed. We chose these instances, because they have been adopted by prior work in machine learning for branch and bound and represent a diverse set of combinatorial problems.
## Validation and test set are different!
> *“validate and test the model in the same dataset?”*
Absolutely not! We apologize for the confusion. The validation and test sets are different! Please see our response to Reviewer qRaC.
## Code will be made publicly available upon acceptance.
> *“Can the codes of this paper be available once the paper is accepted?”*
Absolutely! We will make all experimental code publicly available upon acceptance. In addition, we are in discussions with the SCIP/ PySCIPOpt development team to contribute a new diving plug-in available to the PySCIPOpt library. We hope our contributions will facilitate further research into machine learning for branch and bound generally, and diving heuristics specifically.
---
Rebuttal Comment 1.1:
Comment: Thanks you for the response. Based on your additional experimental results and the reviews from other reviewers, I raised my score to 5 (borderline accept).
However, there are still several questions I am concerned about:
- Related work about node selection. *depth-first search for node selection is known to be problematic, because node selection needs to balance the primal and dual objective, while diving is solely concerned with the primal objective*. In fact, I found most existing research on node selection only focus on finding a good primal solution. Perhaps considering primal and dual objectives together could be extremely challenging.
- Benchmark selection. I ran SCIP again on SetCover and IndSet to check my previous reviews. As I expected, SCIP achieves the best primal solution on most instances *at the root node*, which suggests that these problems are too simple for primal task evaluation. The four benchmarks are widely used for dual tasks, but they are not the best choices for the primal task.
Anyway, I still agree that L2Dive is valuable for the combinatorial optimization community, as research *with open-sourced code* is currently relatively limited.
---
Reply to Comment 1.1.1:
Title: Thank you for your feedback!
Comment: > *I still agree L2Dive is valuable for the community [...]*
Thank you! We are glad you approve of our work! We briefly respond to your two outstanding concerns below.
---
> *[...] research on node selection only focus on finding a good primal solution*
Thank you! We were referring to Achterberg (Constraint Integer Programming, 2009) in our previous response, and would like to share: *The selection of the subproblem that should be processed next has two usually opposing goals within the MIP branch-and-bound search: 1. Finding good feasible MIP solutions to improve the primal (upper) bound, which helps to prune the search tree by bounding, and 2. Improving the global dual bound (Page 73).*
Achterberg discusses depth-first search for node selection in section 6.1 and discusses *best-first search* in section 6.2, which *"aims at improving the global dual bound as fast as possible"* (Page 74). We agree that balancing primal and dual objectives for node selection is extremely challenging and subject to ongoing research. In our work, we focus on the task of finding good feasible solutions via the diving subroutine.
> *SCIP achieves best primal solution on most instances at the root node [for set cover and independent set]*
We would like to clarify that the discrepancy is explained by the fact that we disable separation and other heuristics (cf Section 5), as our interest is in measuring diving performance without confounding with other parts of the solver. Our results (Table 1) indicate that these instances are challenging for existing diving heuristics in this setting. In particular, switching off separation can make the instances significantly harder for diving heuristics to successfully solve. We would like to affirm that in section 5.1 our main interest is comparison to existing diving heuristics, while in section 5.2, we consider diving in a full-solver environment to show improvements in overall solver performance with *L2Dive* on *other* large-scale instances. We also highlight the additional ablation study we performed for the experiments in section 5.1, where we show that *L2Dive* generalizes gracefully to larger instances (cf. Comment to Reviewer gbBh). These instances tend to be harder for SCIP, but the insights they afford in comparing diving heuristics corroborated our earlier findings on the smaller-sized instances. | Summary: This paper introduces a new framework to improve B&B MIP solvers by neural networks. This paper proposes a neural network-based primal heuristic, namely L2Dive. The authors implement L2Dive with SCIP and conduct extensive experiments on several datasets.
---------------------
Post-rebuttal: Thanks a lot for the response. I feel positive about this paper and will vote for accept.
Strengths: * This paper shows the feasibility of using neural networks to replace & improve existing primal heuristics.
* The design details of L2Dive seem technically sound.
* The experiments are extensive and convincing.
Weaknesses: * The authors should provide the timing statistics of different primal heuristics in Table 1.
* As I know, Nair et al. [32] proposed "neural diving" and the conceptual idea is very similar to the contribution of this paper. The authors should make more efforts to address the original technical contributions compared to [32] and try to compare with [32] in experiments.
* Some details of the approach remain unclear to me:
* The neural network is trained with high-quality solutions, while the output of the neural network represents the score of whether a variable should be dived. How do you connect these two? What is exactly the loss function during training?
* How many times is the neural network called when running Algorithm 1?
Technical Quality: 4 excellent
Clarity: 3 good
Questions for Authors: * Why different measurements are considered for different tasks in Table 2?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 3 good
Contribution: 4 excellent
Limitations: None
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: # Thank you for your review.
**Thank you very much for your feedback. We address your concerns in more detail below.**
## Other primal heuristics are complementary to diving, no heuristic rules them all!
>*“The authors should make more efforts to address the original technical contributions compared to Nair et al. [32] and try to compare with [32] in experiments.*
We view Nair et al. [32] as complementary, because it is a primal, but not a diving heuristic. In our experiments in section 5.2, we deploy L2Dive within branch and bound, and our model is called by the solver along with many other subroutines (including other primal heuristics) and the method by Nair et al. [32] could also be used in addition to improve performance further. Intuitively, we expect L2Dive to be particularly effective for instances where the solutions of linear programs can be rendered integral easily, and we expect Nair et al. [32] to be particularly effective on instances, where the overhead of solving sub-MILPs in place of linear programs tends to be small.
Overall, the goal of our work is not to demonstrate the superiority of diving heuristics over other primal heuristics, but to improve on existing diving heuristics with machine learning. We believe diving heuristics play an important role in branch and bound, but so do other primal heuristics, and which method is more appropriate will depend on the particular problem instances under consideration. This is the reason why we compare primarily against other diving heuristics.
We think this is an important discussion, and we have expanded our related work section to include it.
## Variable selection is based on model confidence and dual reasoning. Model is only called once.
>*"The neural network is trained with high-quality solutions, while the output of the neural network represents the score of whether a variable should be dived. How do you connect these two? What is exactly the loss function during training?"*
You are spot on! Our objective is to minimize the KL divergence in equation (7) and at training time our loss function is a mini-batched average of this KL divergence. Our model is only learnt from good feasible solutions, there is no direct supervision on diving or variable selection. While this choice offers several advantages, e.g., much cheaper to collect training data, the model is trained to only predict correct variable assignments, but not what variables to choose to fix for diving.
To bridge this gap, we suggest the rule in equation (8) that balances the model confidence in a prediction with duality theory to encourage shorter dives. Intuitively, the dual reasoning rule is justified if the prediction is correct. The better the prediction, the more confident the model, the better the dual reasoning rule is expected to work. Our empirical results along with the additional ablations we performed support our choice.
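Assuming the model outputs an independent Bernoulli probability per binary variable, the mini-batched KL-divergence loss described above reduces, for 0/1 solution targets, to a binary cross-entropy; a sketch under that assumption (not the authors' exact implementation):

```python
import math

def assignment_loss(predictions, solution):
    """Average KL divergence between 0/1 assignments from a good feasible
    solution and the model's Bernoulli predictions; for binary targets
    this equals the binary cross-entropy."""
    eps = 1e-9  # numerical guard against log(0)
    total = 0.0
    for p, x in zip(predictions, solution):
        total += -(x * math.log(p + eps) + (1 - x) * math.log(1.0 - p + eps))
    return total / len(predictions)
```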
>*“How many times is the neural network called when running Algorithm 1?”*
Thank you, this is an important question! For any dive, we only call the model once at the beginning of the dive. While it’d be possible to call the model more frequently to potentially improve predictions, this keeps the in-service overhead of our method low and simplifies considerations for data collection.
## Load balancing instances are hard, L2Dive is fast.
>*“Why are different measurements considered for different tasks in Table 2?”*
Table 2 reports the primal-dual integral for instances from load balancing, but solving time for instances from neural network verification. The goal of Table 2 is to report the overall solver performance for different methods on each task (dataset). Overall solver performance is naturally measured by solving time, i.e., the total time it takes to completely solve an instance (primal-dual gap is zero). However, the load balancing instances are so difficult to solve, that they cannot be solved in a reasonable amount of time. For example, not a single instance can be solved using SCIP (Default) within 60 minutes. Therefore, we set a time limit of 900 seconds and report the primal-dual integral instead. The primal-dual integral is an accepted measure of solver performance (Achterberg et al.).
Thank you for raising this point. We now highlight the reason for reporting the primal-dual integral for load balancing in Table 2 more prominently.
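For reference, the primal-dual integral mentioned above can be computed from a logged time series of primal and dual bounds as the time integral of the gap; a simple piecewise-constant sketch (our own helper, not SCIP's implementation):

```python
def primal_dual_integral(events, time_limit):
    """events: chronologically sorted (time, primal_bound, dual_bound)
    snapshots; returns the integral of the absolute primal-dual gap,
    held piecewise constant, up to the time limit."""
    integral = 0.0
    for i, (t, primal, dual) in enumerate(events):
        t_next = events[i + 1][0] if i + 1 < len(events) else time_limit
        integral += abs(primal - dual) * (t_next - t)
    return integral
```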
>*The authors should provide the timing statistics of different primal heuristics in Table 1.*
We are happy to report the overall execution times for all divers in the attachment and will include them in the appendix, but did not do so initially, because they are largely negligible. On set cover, independent set and combinatorial auctions, all divers take less than a second to run, while on facility location all divers tend to run for between three and four seconds. For all instances, execution time is mostly correlated with the depth of the dive. A method that quickly renders an instance infeasible and thus aborts the dive (unsuccessfully) will have a small execution time, while a method that dives deep will have a longer execution time, but not necessarily a good solution (for example, Lower on set cover runs 50% longer than L2Dive). L2Dive naturally incurs a small overhead for calling the neural network, but it also consistently finds better solutions (Table 1 in main text).
---
Rebuttal Comment 1.1:
Comment: Thanks a lot for the response. I feel positive for this paper and will vote for accept. | Summary: This paper presents a technique for heuristically generating good feasible solutions for mixed-integer programming problems by leveraging known, good feasible solutions for related problem instances. The technique is a "diving" heuristic, which subsequently fixes subsets of the integer decision variables, and uses a generative model (and a bit of LP duality) to determine the subsets. A computational study suggests that their technique outperforms the diving heuristics contained in the SCIP solver.
Strengths: This paper falls squarely in a line of research (learning for optimization) that is of substantial interest to the NeurIPS and mathematical optimization communities. The ideas are interesting and nontrivial (the application of LP duality is neat and clean, if not especially deep). The contributions are, to my reading, quite solid, though the general area in which they are operating has been studied for some years. The paper is very well-written and, to my understanding, original, though there are a handful of closely related papers that the authors situate themselves against explicitly.
Weaknesses: As the authors themselves recognize (to their credit), the computational baseline against SCIP's existing diving heuristics may be a weak one. But, as SCIP is a best-of-breed open MIP solver and this study requires tight integration into the solver, I do not see this as something that can be held against this paper.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: * L49: Unfinished sentence.
* L149: What are "trivial solutions"?
* Proposition 1: The paper should be self-contained, and not require the appendix to comprehend. I'd suggest listing the dual LP in the text, and perhaps move some of Section 2.2 to the appendix if space is needed.
* Citation [17] has broken typesetting.
* L257: Can the authors please confirm that a _different_ set of 500 instances are used for validation and testing?
* Section 5.1 could be further strengthened by comparing against other, non-diving, heuristics in SCIP.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 3 good
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: # Thank you for your positive reception of our work.
**We are happy to integrate the dual LP into the main text. We have fixed the typos you found in the main text. We address your outstanding questions below.**
>*“Please confirm that a different set of 500 instances are used for validation and testing?”*
Absolutely! We apologize for the confusion. They are different! Specifically, for each dataset considered in Table 1, we generated 2000 (iid) instances in total, and used 1000 of these instances for training, 500 of these instances for validation and the remaining 500 of these instances for testing. This question was also raised by Reviewer cl66 and we have edited the main text to make this crystal clear.
>*“Existing diving heuristics may be weak, [... cannot] be held against this paper”*
Thank you! We agree and would like to confirm that unfortunately Gurobi’s diving code is closed-source to date and thus does not facilitate any experiments with *L2Dive*. We also highlight that we compare against a *tuned* diving ensemble in section 5.2 which significantly improves over SCIP’s default setting and is a strong baseline for direct comparison.
>*“What are trivial solutions (Line 149)?”*
We were originally referring to the trivial solutions that SCIP tests in the beginning of a solve, e.g., setting all variables to zero. However, we believe that “poor feasible solutions” is much clearer in this context. Thank you for flagging this, we have changed the main text!
---
Rebuttal Comment 1.1:
Title: Thanks
Comment: Thank you for this feedback, authors. This will be taken into account. | Summary: The paper develops a learning strategy to enhance variable selection in primal diving heuristics. More specifically, such a methodology relies on a generative model based on graph neural networks to predict the likelihood of variables assuming specific values. The predictions of the mean values are then integrated with dual reasoning (i.e., determining whether tightening would lead to an optimal linear programming solution) to decide the next variable to dive into. Numerical results evaluate the impact of the heuristic on a class of four combinatorial optimization problems, as well as its impact on a complete branching method for load balancing and neural network verification instances.
Strengths: + Well-motivated application.
+ Model is nicely designed.
The paper contributes to the research stream of machine-learning methods incorporated into combinatorial methods. I found that the generative model is compelling and fits well within diving heuristics, and it could be applied to other similar primal heuristics as well. The solution augmentation is indeed a challenge for training, but I appreciate the discussion, and I also found the counting strategy quite novel (albeit possibly expensive?). I imagine that could be considered in other contexts as well, especially when providing sufficiently diverse solutions.
Weaknesses: - Numerical results lack detail and more thorough analysis.
- Presentation needs a few revisions.
My primary concern is that results are not examined in detail; the proposed L2Dive technique completely dominates the other heuristics on the primal gap (Table 1) and has very small standard errors; for n=1,000 instances, this would indicate that the variance is quite small across a large instance class. This is somewhat rare in primal heuristic development and for such a variety of instances, and it is unclear from the text why this is happening. More specifically:
(a) The text does not detail how instances of the four combinatorial optimization problems are selected. In particular, it refers to reference [17] for the benchmark tested, but in that paper they utilize multiple combinations of "easy", "medium", and "hard" cases for each class; I am not sure which ones are considered in the paper. The training set choice is not clear either. For instance, given that Erdős–Rényi instances are so particularly structured, are the independent set results really representative in practice in this particular context? It could be more beneficial to consider, e.g., DIMACS cases, which present a more diverse structure.
(b) The paper only reports the non-normalized *absolute* primal gap, which is the difference between the primal solution value and an optimistic bound. In this case, it is difficult to assess how significant the improvements are; for example, "222" and "256" could effectively have a small difference in terms of the primal-dual gap in combinatorial auctions if the bound is large.
(c) One important missing analysis is the relationship between the model $q_{\theta}$ and the prioritization given by Proposition 1. First, how would L2Dive behave without the indicator term from Proposition 1? Or, in other words, is the learning effective? Second, what would happen to the other diving heuristics (coefficient, Farkas, etc.) if variables are first prioritized by Proposition 1?
I also believe the presentation needs to be revised. In particular:
- There are several typos and incomplete sentences. For example, "verificationw" (l.13), missing "in Figure 1" (l.50), adjectives miss hyphen (branch and bound tree --> branch-and-bound tree), and errors in references (e.g., in [17])
- Table legends should start with what the table represents and not with its aims, as that is a bit confusing.
- Figure 1 is a bit informal and not needed, as the overall sequential process is relatively easy to understand.
- One suggestion is that $\pi$ is typically used as dual variable as opposed to a bound. Perhaps $\ell$ and $u$ could be clearer?
** Updated score after thorough discussion with the authors.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: 1. From my understanding, if variables are discrete then $q_{\theta}$ provides a vector of means for a Bernoulli sequence. How would $j^*$ be calculated in this case?
2. Could the authors kindly comment on (a), describing how instances for both the training and evaluation sets are picked?
3. Have the authors performed experiments with/without Proposition 1?
4. Would it be possible to augment any of the diving heuristics with Proposition 1 to evaluate if they would improve performance?
5. Could the authors also include a table with the primal-dual gap?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: Limitations were properly addressed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: # Thank you. We made revisions based on your feedback.
**Thank you very much for your thoughtful review. We have made revisions to our paper based on your feedback and address your concerns in more detail below.**
## We performed additional ablations to isolate the effects of dual reasoning.
>*“experiments with/without Proposition 1? [...] L2Dive without the indicator term in Proposition 1? [...] is the learning effective?”*
Thank you for this suggestion! We fully agree that this ablation is a good idea; it was also proposed by Reviewer cl66. We have carried out an ablation study on capacitated facility location, where we evaluated
- L2Dive with $s_j = q_\theta(\hat{x}_j)$, i.e. only using model confidence, not using dual reasoning.
- L2Dive with $s_j = \text{Uniform}(0, 1)$, i.e. a random variable selection, not using model confidence, not using dual reasoning.
The results are shown in the attachment. Even with a random scoring rule, L2Dive outperforms the best heuristics in terms of primal gap. However, the results significantly improve when using model confidence, and they tend to slightly improve when additionally using dual reasoning. Since dual reasoning comes at negligible additional cost, because the dual values are readily available in the solver, we choose to use it. We have added these results to the main text along with a short discussion.
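For concreteness, the three scoring rules compared in this ablation could be sketched as follows. This is a minimal illustration with hypothetical field names, and the exact way confidence and the Proposition 1 indicator are combined here is an assumption, not the paper's actual implementation:

```python
import random

def select_next_variable(candidates, mode="l2dive"):
    """Pick the next variable to fix in a dive under one of three ablated
    scoring rules. `candidates` is a list of dicts with hypothetical fields:
      'conf'    - model confidence q_theta(x_hat_j) for variable j
      'dual_ok' - 1 if fixing j keeps the LP dual solution optimal
                  (the indicator from Proposition 1), else 0."""
    def score(c):
        if mode == "random":        # ablation: no learning, no dual reasoning
            return random.random()
        if mode == "confidence":    # ablation: model confidence only
            return c["conf"]
        # full rule (assumed combination): prefer dual-supported variables,
        # break ties by model confidence
        return (c["dual_ok"], c["conf"])
    return max(candidates, key=score)
```

Under this sketch, a dual-supported variable with moderate confidence would be chosen over a high-confidence variable that Proposition 1 rules out.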
>*"Would it be possible to augment any of the diving heuristics with Proposition 1 to evaluate if they would improve performance?"*
We agree that this is an interesting avenue for future work in operations research, but we are primarily concerned with developing machine learning methods. Since we rely on the plug-in to call all standard divers in SCIP (Section 3.3), using a dual reasoning rule for existing diving heuristics will require editing their source in SCIP directly and is out of scope for this work. However, we will release our code publicly upon acceptance and hope that this will facilitate research in this direction.
## Experiments in 5.1 are intended to assess primal performance.
>*“only reports non-normalized absolute primal gap […], what about primal-dual gap?”*
We report the primal gap in Table 1, because this is the bound that any diving heuristic effectively improves. We chose to report the absolute primal gap because it is conceptually simpler. We were initially concerned about using the relative gap as defined in SCIP, because it is ill-defined if the primal bound and objective value have opposite signs, and SCIP may normalize the absolute gap using either the primal bound or the objective value, which can hinder comparisons between divers.
However, we are happy to report the relative primal gap where we always normalize by the absolute objective value (we found that for all instances, primalbounds and objective values are of the same sign) in the attachment and now include these additional results in the appendix.
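The normalization described here can be sketched in a few lines (the function name is illustrative, not from the paper's code):

```python
def relative_primal_gap(primal_bound, optimal_value):
    """Relative primal gap: the absolute primal gap normalized by the
    absolute objective value. Assumes both values share the same sign,
    as observed for all instances in the text."""
    return abs(primal_bound - optimal_value) / abs(optimal_value)
```

For a minimization instance with optimum 100 and best found solution 110, this yields a relative gap of 0.1 (10%), which is comparable across problem classes with very different objective scales.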
The experiments in section 5.1 were designed to study the diving performance of L2Dive and compare it against existing diving heuristics without confounding the results with other subroutines of the solver. Therefore, we did not branch, and we disabled cutting planes and other primal heuristics. Hence, we do not compute the dual bound or the primal-dual gap in these experiments, as they are naturally weak. We focus on overall solver performance (primal-dual integral, solving time) in our experiments in section 5.2.
## Data is appropriate for our experimental goals.
>*“How instances of the four combinatorial optimization problems are selected? Are independent set results really representative […]?”*
We use the “easy” (i.e., smaller-sized) instances for our experiments. These instances were randomly generated with exactly the same procedure as described by Gasse et al. using the generators made available in Ecole [Prouvost et al., 2020]. We now include the explicit reference to the Ecole library in the main text.
We chose these instances, because they collectively represent a diverse set of combinatorial problems and they have been adopted by previous work [e.g., Gasse et al. (2019), Gupta et al. (2020), Scavuzzo et al. (2022) among others]. We believe there is value in using the same problem benchmark. We chose the “easy” instances, because they allowed us to test many methods at scale with relatively modest computational resources. In addition, our results indicate that they are “sufficiently hard” for existing diving heuristics (larger average primal gaps). Thus, they provide a useful test bed. In section 5.2, we focus on overall solver performance in real-world application and use harder and larger instances.
## Table 1 reports the standard error of the mean, not the standard deviation of the sample.
>*“The primal gap (Table 1) has very small standard errors. [...] for n=1,000 instances, this would indicate that the variance is quite small across a large instance class.”*
We apologize for the confusion. Table 1 reports the standard error of the mean (SEM), not the standard deviation of the sample (all n observations). Specifically, the standard error of the mean is defined as
$$
\text{SEM} = \frac{s}{\sqrt{n}}
$$
where $s$ is the standard deviation of the sample computed for $n=500$ observations. Naturally, as the sample size $n$ increases, the standard error of the mean is reduced, as the estimate becomes more accurate. We choose to report the standard error of the mean, because statistical significance can be more easily gauged (e.g., via t-test) from it than from the standard deviation of the sample. We find that the standard deviation on all four datasets is of the same order as the standard deviation of the objective value and find no anomaly. We are happy to report the standard deviation in the main text or the appendix if others find it helpful.
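The distinction can be checked numerically with a short sketch (standard definitions; not tied to the paper's code):

```python
import math

def sem(sample):
    """Standard error of the mean: sample standard deviation divided by
    sqrt(n), where the standard deviation uses the (n - 1) denominator."""
    n = len(sample)
    mean = sum(sample) / n
    s = math.sqrt(sum((v - mean) ** 2 for v in sample) / (n - 1))
    return s / math.sqrt(n)
```

Note that the SEM shrinks as the sample grows even when the underlying spread is unchanged, which is why a small SEM at n = 500 does not imply a small sample standard deviation.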
---
Rebuttal Comment 1.1:
Comment: Thank you for the detailed responses.
I believe the results concerning the optimality gaps are more convincing and place the approach in a better light. It is difficult to grasp what absolute primal values mean as problem classes are quite diverse.
The ablation is also quite intriguing. What made model confidence so impactful for the facility location instances?
Further, what happens when diving is applied to the "difficult" instances? Even if the benchmark is appropriate as a means to test baseline methods, I would imagine that the contribution could be more impactful if it showed how ML-based methods help address those challenging problems.
Finally, I did understand what the standard error was; my comment is that the variation is small because $\sqrt{500}$ is not large relative to the primal gaps reported. My question remains on why variance was not high, as typically seen in more heuristic approaches.
---
Reply to Comment 1.1.1:
Title: Thank you for your feedback! New ablation!
Comment: Thank you for your feedback! Please see our individual responses below
> *the results concerning the optimality gaps are more convincing*
Thank you! We do agree and we are happy to report results for the optimality gaps rather than the absolute primal gap in the final manuscript!
> *“The ablation is also quite intriguing. What made model confidence so impactful [...]?”*
Thank you for your interest and for having suggested this ablation! We observed that the predictions in which the model was highly confident (and which were thus chosen when model confidence was used) tended to have higher accuracy (in predicting the optimal solution) than those where the model was less confident. When model confidence was not used, it was more common to fix variables to values that rendered the problem infeasible earlier in the dive, which led to poor solutions. We are happy to add this discussion to the results of the ablation.
> *“What happens when diving is applied to the "difficult" instances?” [...] showing how ML-based help address those challenging problems*
As you recognized, the main objective of our experiments in section 5.1 was to test many methods at scale with relatively modest computational resources (computing primal gaps requires solving each test instance to optimality). We consider large-sized instances in a full-solver environment in section 5.2, where we report total improvements in overall solver performance with *L2Dive*, which is arguably our most important experimental contribution.
However, we did not want to shy away from your question! We took the opportunity to evaluate L2Dive and all baseline divers on 100 of the “hard” capacitated facility location instances, which are the largest-sized instances of the four classes. However, we did not train our model on the “hard” instances; instead, we evaluated the model that was previously trained on the “easy” instances. This is in line with the suggestion of Reviewer cl66, who proposed a “generalization” ablation. The results are displayed below:
| Diver | Relative Primal Gap (standard error) |
|:----------|----------:|
| *L2Dive* | **1.43 (0.11)** |
| Coefficient | 5.35 (0.17) |
| Distribution | 5.05 (0.17) |
| Farkas | 3.16 (0.11) |
| Fractional | 9.38 (0.20) |
| Linesearch | 4.56 (0.19)|
| Pseudocost | 2.25 (0.10) |
| Vectorlength | 3.09 (0.19) |
| Random | 4.10 (0.16) |
| Lower | 4.01 (0.13) |
| Upper | 3.17 (0.15) |
First, we observe that the optimality gap (relative primal gap) tends to be larger or similar for most heuristics, which is expected as the instances are more difficult. Second, we observe that L2Dive still outperforms standard diving heuristics on the larger instances, which suggests that the model is able to generalize to larger instances, even though it was only trained on the smaller instances.
We think these results are interesting and strengthen our work and we are happy to include them in the final manuscript.
> “why variance was not high, as typically seen in more heuristic approaches?”
Thank you for your clarification!
- First, we observe that the standard deviation for L2Dive and the other primal heuristic approaches is comparable on all four datasets (Table 5.1). It tends to be slightly larger for heuristics that struggle to find good solutions on average (e.g., Coefficient on Set Cover). We observed that this tends to happen because these heuristics find poor solutions on most instances and good solutions on only a few, which increases variance compared to better-performing heuristics and L2Dive, which consistently find good solutions.
- Secondly, we generally expect the standard deviation of the primal gaps to be higher on datasets where the structural differences between instances are larger and their objective values may fluctuate more widely (possibly by orders of magnitude). The standard deviations of the objective value on our test sets are 26, 323, 683, and 4 for set cover, combinatorial auctions, facility location and independent set, respectively, which corroborates this intuition. To generate these instances, we used the standard generators from the Ecole library with the default parameters reported in Gasse et al. (2019), which have been used in other work.
- Finally, we’d like to affirm that the focus of our work is to learn problem-specific diving heuristics for particular applications. Hence, instances that share structural commonality and may feature smaller standard deviation in their objective value do not seem inappropriate. However, we are aware that there are limitations in learning diving heuristics for specific applications (cf. Broader Impact) and there is value in studying the transfer of models and the development of general-purpose models that perform well on diverse problem collections. This is a natural direction to explore in the future, for this and for other work at the intersection of machine learning and integer programming. | Rebuttal 1:
Rebuttal: ## Thank you for your thoughtful feedback!
**We have incorporated the feedback of the reviewers and made the following revisions to our draft:**
- We present an additional ablation study (see attachment!) on capacitated facility location to study L2Dive’s variable selection (dual reasoning and model confidence) in more detail. We propose to add it to the main text along with a short discussion and interpretation of the results.
- We report additional metrics (e.g., relative gap, execution time, see attachment!) as suggested and propose to add them to the main text/ appendix.
- We added discussions of relevant related work as suggested in the main text.
- Small edits (typos, clarifications, etc.), thank you all for flagging any you found!
Please see individual responses for more detail below.
Pdf: /pdf/570efc7aae173e8a50adba6887a3a4cae765f41a.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Parallel Submodular Function Minimization | Accept (spotlight) | Summary: This paper considers the problem of parallel submodular function minimization, where the function is assumed to be integer valued with range $[-M, M]$. The authors propose two algorithms: one which runs in $\tilde{O}(n^{1/3} M^{2/3})$ rounds with $\tilde{O}(n^2 M^2)$ query complexity, and another which runs in 2 rounds with $n^{O(M)}$ query complexity. This improves over existing polynomial-time submodular minimization algorithms, which run in $\Omega(n)$ rounds.
Strengths: The problem of minimizing submodular functions efficiently in a highly parallel manner is a natural one to consider. Several lower bounds have been derived for this problem, while upper bounds are not well investigated, except for ones resulting from sequential algorithms which run in $\Omega(n)$ rounds, and brute force search which runs in 1 round but uses $2^n$ queries. The algorithms proposed in this paper are the first upper bounds for this problem to improve over these results.
The first algorithm result also follows from improving the parallel complexity for minimizing $\ell_\infty$-Lipschitz convex functions from $\tilde{O}(n^{2/3} / \epsilon^{2/3})$ to $\tilde{O}(n^{1/3} / \epsilon^{2/3})$ rounds, which is tight in terms of dependence on n.
The results are correct and presented clearly.
Weaknesses: The results of this paper are mostly based on existing work and two rather simple observations: a reduction from constrained to unconstrained optimization of $L$-Lipschitz functions, and that convolving an $\ell_\infty$-Lipschitz function with a Gaussian changes the function less than in the case of $\ell_2$-Lipschitz functions.
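The smoothing comparison alluded to here can be sketched with the standard Gaussian-convolution bound (a sketch; constants are indicative, not quoted from the paper):

```latex
\[
\hat f_\rho(x) \;=\; \mathbb{E}_{g \sim \mathcal{N}(0,\,\rho^2 I_n)}\big[f(x+g)\big],
\qquad
\big|\hat f_\rho(x) - f(x)\big| \;\le\; L\,\mathbb{E}\,\|g\|,
\]
```

where the norm matches the Lipschitz assumption on $f$: for an $\ell_2$-Lipschitz $f$ one pays $\mathbb{E}\|g\|_2 \le \rho\sqrt{n}$, whereas for an $\ell_\infty$-Lipschitz $f$ (the Lovász-extension case) one pays only $\mathbb{E}\|g\|_\infty = O(\rho\sqrt{\log n})$, so the same smoothing radius incurs a $\sqrt{\log n}$ rather than $\sqrt{n}$ approximation cost.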
The proposed algorithms are mostly of theoretical interest. I expect that the first algorithm would not be efficient in practice (it is not clear how large the universal constant $C$ is), and the second algorithm is essentially an exhaustive search over all $M$-cardinality sets, which is only efficient for very small $M$.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: - Does Theorem 1.1 extend to minimizing non-integral-valued submodular functions up to $\epsilon$ additive error? As far as I can tell the same proof holds. If yes, it would be good to include that.
- [ALS20] showed how to compute a sparse stochastic gradient of the Lovász extension of a submodular function using $O(1)$ queries. Would it be possible to reduce the query complexity of the first algorithm using a similar strategy as in [ALS20]?
Minor issues:
- The section on related work is a bit repetitive. I suggest only mentioning additional related work not already discussed in the introduction.
- In Lemma 2.3, RHS of Eq (1), $f^{c,r}\_{reg}(y^{c,r})$ should be $f^{c,r}\_{reg}(y)$
- In definition 1.2, $\| f(x) - f(y)|$ should be $| f(x) - f(y)|$
- In Lemma 2.5 and the discussion above it, you use d for the dimension, instead of n as in the rest of the paper.
- In Eq. (3), the term inside the expectation should be $f(x - y)$
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The authors clearly state the theoretical complexity of their algorithms, along with the assumptions required for their results to hold. But they don't address the practical performance of their algorithms
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for a careful reading of our paper and writing the review.
In the weakness section, the reviewer points out that our results follow from existing work and two observations, namely (i) our reduction from constrained to unconstrained optimization for Lipschitz functions and (ii) that we can convolve with a larger Gaussian for $\ell_\infty$-Lipschitz functions. We believe that our broader contribution is the connection we make between parallel SFM and $\ell_\infty$-optimization, and the near-optimal-depth algorithm for $\ell_\infty$-optimization for constant $\epsilon$, both of which are important and well-established problems. Though the observation about the quality of the Gaussian convolution for smoothing $\ell_\infty$ (as opposed to $\ell_2$) Lipschitz functions is straightforward, that we show it leads to near-optimal $\ell_\infty$-Lipschitz convex optimization in any regime is particularly striking. Furthermore, to our knowledge the simple reduction we give from constrained to unconstrained optimization for Lipschitz functions was not explicitly known; we believe this may be useful for broader applications in convex optimization. That we make this progress through simple technical insights is not necessarily a bad thing. Finally, we think our results might shed light on how the Lovász extension of a submodular function is similar to a general $\ell_\infty$-Lipschitz function, which aids algorithm design for parallel SFM more generally.
The reviewer also wonders about the practicality of our algorithms. Indeed, the main goal of this work is to theoretically investigate the parallel complexity of SFM. The empirical performance of our first algorithm would depend on the empirical performance of the [Carmon et al. 2023] algorithm.
The reviewer also asks some interesting questions! Thank you! To answer the first question, as to whether Theorem 1.1 extends to minimizing real-valued functions: indeed it does, and we will mention this in the next iteration of the paper. The second question is whether the techniques from ALS20 could speed up the algorithm. While we are not sure of the answer, the approach in ALS20 for sampling subgradients via a data structure depends on the interplay between submodular properties of the Lovász extension and the structure of the iterative methods applied. It is not immediately clear that such an approach extends to our setting, given the different methods. Nevertheless, we agree that this suggests an interesting question for future work, e.g., whether it is possible to decrease the query complexity of our algorithm to something with subquadratic or near-linear dependence on $n$. We may remark on this in the final version.
The reviewer also points out some editorial comments; we thank the reviewer for pointing these out, and we will fix these in our next version.
---
Rebuttal Comment 1.1:
Title: Response to rebuttal
Comment: Thank you for your response.
I agree obtaining interesting results using simple technical insights is not a bad thing.
I will raise my score by one point, as I think the results presented in this paper will be interesting to the community, even if the proposed algorithms are not practical. | Summary: This paper studies the parallel complexity of submodular function minimization: given a submodular function defined on the $2^n$ subsets of $[n]$ taking values between $-M$ and $M$, find a minimizing subset of $[n]$. It provides two upper bounds: one algorithm with polynomial query complexity in both $M$ and $n$ and taking $O(n^{\frac{1}{3}}M^{\frac{2}{3}})$ rounds, and the other with query complexity $n^{O(M)}$ in two rounds.
Strengths: Writing is clear and convincing, with all relevant technical background and results clearly explained. Related work seems to be cited and placed in context, and open questions are identified in the conclusion. Although I am unfamiliar with the area, the problem seems fundamental and relevant to the conference and machine learning more widely.
Weaknesses: My only (minor) criticism is that it took me a while to understand the difference between the "parallel" setting of SFM and the standard sequential setting. I eventually realized that the number of rounds used is essentially a measure of the *adaptivity* of an algorithm - the more adaptive an algorithm is, the more rounds it needs. I don't think the word adaptivity is mentioned in the paper, so maybe this intuition could be explained slightly better.
Perhaps a couple more sentences on the wider implications of the results for machine learning more broadly might also be nice.
typo, line 253: "then" -> "than"
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: Could you clarify what "polynomial time" means in the context of line 26? In this model, we are only interested in query and round complexity, right?
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 3 good
Limitations: All limitations seem to be addressed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their time to carefully read and review our paper.
In the weakness section, the reviewer asks for a clarification of “adaptivity” versus “parallelism”; we agree that explaining this clearly is a good idea, and we will include this explanation in the next version of our paper. The reviewer also asks about the wider implications for ML. In the past decade or so, submodular function optimization has arisen in multiple ML applications (see https://arxiv.org/pdf/2202.00132.pdf, for instance), and with the rise of infrastructure for massively parallel computation, it is an important question to understand parallel algorithms for the same; “low-depth” algorithms would have implications for these ML applications.
In the questions section, the reviewer asks us to clarify the meaning of polynomial time, versus round and query complexity. In the context of SFM, polynomial *time* simply means that the additional computational work apart from the queries takes polynomial time; a little more formally, the algorithm performs $O(n^c)$ additional arithmetic operations for some constant $c>0$. In the context of line 26, we are only talking about sequential algorithms, as we have not yet introduced the model of parallel computation at that point. For sequential algorithms, prior work has studied fast SFM algorithms that have both small query complexity and small additional computational cost.
In the parallel complexity model, we are primarily interested in the parallel round complexity and query complexity, but ideally the algorithm’s additional computational cost (also sometimes referred to as “work” in this community) should be polynomially bounded in terms of its query complexity (which is the case for both of our algorithms). It is true, however, that the focus of our paper was on the round and query complexity. We hope this clarifies the reviewer’s question. | Summary: The paper studies the parallel complexity (number of rounds of queries) for polynomial query-complexity submodular function minimization. The main result of the paper is an $\tilde{O}(n^{1/3} M^{2/3})$ bound for this, where $n$ is the size of the ground set and the function is integer valued with an upper bound of $M$ on its absolute value. The approach of this work is to reduce to the problem of minimizing the Lovasz extension (which is $L = 3M$-Lipschitz) over the unit $n$-dim $\ell_\infty$-ball, and then use a simple regularization term along with the $L$-Lipschitzness to reduce it to an unconstrained minimization problem. The algorithm for this optimization is derived from the
recent work of [Carmon et al. 23] who gave a $\tilde{O}(d^{1/3} \epsilon^{-2/3})$-round parallel $\epsilon$-approximation for $1$-Lipschitz (over $\ell_2$) convex function minimization with a minimizer in the $d$-dim unit $\ell_2$-ball and a subgradient oracle. To adapt this to the $\ell_\infty$ case, the paper first shows that a certain Gaussian convolution has better bounds with $\ell_\infty$-Lipschitzness, effectively canceling out the blowup incurred by the $\ell_\infty$ vs. $\ell_2$ norm bound. This, along with the fact that the subgradients of the Lovasz extension can be efficiently computed by queries to the function, yields the result. Additionally, the paper also gives a simple combinatorial 2-round $O(n^{M+1})$ query algorithm for this problem.
Strengths: For small $M$, the main result of this work improves upon previous algorithms, which required $\Omega(n)$ rounds, and gives the first bound that is sublinear in $n$, albeit dependent on $M$, with the dependence on $n$ matching the lower bound of $n^{1/3}$ given by [CCK21]. In the process, the paper extends the work of [Carmon et al. 23] to the $\ell_\infty$ case, essentially matching the lower bound [Nem94, DG19] in terms of the dimension. Overall, the algorithmic bounds achieved by the paper are notable improvements (and optimal in some parameters) for these problems.
Weaknesses: The main result of the paper is based on the recent result of [Carmon et al. 23]. The fact that SFM can be reduced via the Lovasz extension to optimization over $\ell_\infty$ is well-known and the main result of this work is to adapt the results on optimization over $\ell_2$ of [Carmon et al. 23] to the $\ell_\infty$ case. The techniques for this are fairly straightforward. Overall, the contributions do not provide substantially novel methodology for such problems, and may not be sufficiently insightful or deep.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: Editorial comments:
1. Line 81: “an convex function” --> “a convex function”.
2. Lines 258-259: $\pi_x$ should be used instead of $\pi$.
3. In Sec. 2.2 $d$ is used to denote the dimension, however $n$ is used instead in lines 302, 333.
4. Line 285 mentions “above theorems”, which is unclear. Please refer to the relevant theorems.
5. Line 311-312: refer to Fact 2.4 for the subgradient oracle $g$.
6. Line 335: “subscript” --> “superscript”.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their time to carefully read and review our paper.
In the weakness section, the reviewer mentions that it is well known that SFM can be reduced to $\ell_{\infty}$-Lipschitz optimization. We agree. However, we believe that showing that the $\ell_{\infty}$-Lipschitzness of the Lovasz extension, when coupled with our new parallel convex optimization methods tailored to $\ell_{\infty}$-Lipschitz functions, leads to improved parallel SFM algorithms is an important new insight. Furthermore, to our knowledge the simple reduction we give from constrained to unconstrained optimization of Lipschitz functions was not explicitly known; we believe this may be useful for broader applications in convex optimization.
While we agree that our techniques are somewhat straightforward given the framework in [Carmon et al. 23], our broader contributions are the aforementioned connection between parallel SFM and $\ell_\infty$-optimization and the near-optimal-depth algorithm for $\ell_\infty$-optimization for constant $\epsilon$; both constitute important progress on well-established open problems. Though the observation that Gaussian convolution yields better smoothing bounds for $\ell_\infty$-Lipschitz (as opposed to $\ell_2$-Lipschitz) functions is straightforward, the fact that it leads to near-optimal $\ell_\infty$-Lipschitz convex optimization in any regime is particularly striking. That we make this progress via simple technical insights is not necessarily a bad thing.
In the Questions section, the reviewer suggests many editorial comments. We agree with all of them, and we will incorporate these changes in the next version. Thank you!
---
Rebuttal Comment 1.1:
Comment: Thank you for the response, this mostly addresses my concerns. Please add the explanations provided in the rebuttal to the paper as well. I have raised my rating by one. | Summary: This paper considers the submodular minimization problem (SFM), where the submodular function $f$ is bounded between $-M$ and $M$. There has been a large number of papers that focus on solving this problem in as few queries as possible, but the proposed algorithms are generally highly sequential (at least $\Omega(n)$). On the other hand, there is a lot of work on the parallel complexity of SFM. The contribution of this paper is to propose and analyze algorithms with low parallel complexity.
They first propose an algorithm for SFM that runs in $\tilde{O}(n^{1/3}M^{2/3})$ rounds. This algorithm uses a reduction from parallel SFM to parallel convex optimization in order to achieve this. They provide an improved method for parallel convex optimization to be used for their algorithm. Next, they propose an algorithm that is just 2 rounds but with query complexity $n^{O(M)}$, which is optimal if $M$ is constant. This second algorithm is very simple and clean.
Strengths: - The problem setting is very well motivated since SFM includes many applications that involve large datasets and therefore parallel algorithms are needed. This paper also fills an interesting gap in existing literature on SFM, where there is work on the parallel complexity but no algorithms that run in few rounds.
- Very strong theoretical results and ideas, which could be useful for other problems.
- Extremely clear and well-written.
Weaknesses: - There is no experimental evaluation, but this paper is a theory paper so it is not much of an issue.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: None
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 4 excellent
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for a careful reading of our paper and writing the review.
In the weakness section, the reviewer points to the lack of experiments. As the reviewer also notes, the main focus of the paper is exploring algorithms for SFM and $\ell_{\infty}$ convex optimization through a theoretical lens. | null | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: In this paper, the author(s) investigate the classic submodular minimization problem within the parallel computing regime. The setting is an integral submodular function $f$ defined on $2^{[n]}$ which is bounded ($|f| \le M$) and normalized ($f(\emptyset) = 0$). Their primary contribution is obtaining query-efficient $M$-dependent algorithms with a number of rounds sublinear in $n$. This is achieved by regularizing the Lov{\'a}sz extension of the submodular function and then using prior results on parallel convex optimization. They also give a 2-round $O(n^{M+1})$-query algorithm for submodular minimization.
Strengths: The motivations and background to the results are clearly stated. The proof ideas are clearly written.
Weaknesses: Page 2, line 69 capitalize section 2.1.
Page 2, line 70 minmizer misspelled.
Page 3, line 114 capitalize theorem 1.1.
Page 5, line 225 comma needed after "As discussed in the introduction".
Page 6, Lemma 2.3 any $c,x \in \mathbb{R}^n$ I believe should be $c,r \in \mathbb{R}^n$.
Page 9, line 359 in Algorithm 1 line 4, computing $A(S)$ may require an input of size $M + 1$, i.e., when $|S| = M$. The proof on this line only requires inputs of size $M$.
Page 9 line 378, $f(S_* \cup j)$ change to $f(S_* \cup \{j\})$.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: I have no further questions beyond the potential points of confusion stated in the weaknesses section.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 3 good
Limitations: The author(s) clearly state the assumptions needed for their algorithms and end with some potential open questions on how their work may be improved.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for a careful reading of our paper and writing the review.
In the weakness section, the reviewer points out several typographical errors; we agree with them all, and will fix them in the next version. For line 359 specifically, indeed what we should have stated is that the size of all sets queried in Line 4 is at most $M+1$ not $M$ as initially stated. This still allows for the algorithm to be implemented in 2 rounds and does not change the query complexity of our algorithm.
Thank you! | null | null | null | null | null | null |
A Unified Model and Dimension for Interactive Estimation | Accept (poster) | Summary: This paper studies a general interactive learning setting, which is termed "interactive estimation". In this model, we are given a set of alternatives, which contains the target. At each step, the learner can choose an alternative and receive the similarity of this alternative with the target. Based on this input, the goal is to get as close as possible to the target. The authors then demonstrate that many well studied settings are special cases of this general framework, including structured bandits and SQ learning. Then, they introduce a novel combinatorial parameter which they call the "dissimilarity dimension". They give a general algorithm which, under reasonable assumptions, has regret bounded by a polynomial in the dissimilarity dimension. Furthermore, they show that the dissimilarity dimension is polynomially related to the strong SQ dimension, which automatically gives lower bounds on the sample complexity in terms of the dissimilarity dimension. Lastly, they prove that the dissimilarity dimension is upper bounded by the Eluder dimension and give concrete examples where the gap between the two is quite wide, which yields improved regret analysis for some bandit settings.
Strengths: Overall, the contributions of this paper are novel and highly relevant for the learning theory community.
In terms of results, both definitions of the interactive estimation setting and the dissimilarity dimension are valuable contributions of this work. It is interesting that one can define such a general setting that encompasses various seemingly unrelated problems and still provide a meaningful combinatorial measure of complexity that characterizes learnability under general conditions. The fact that studying the dissimilarity dimension results in tightening various existing results about specific bandit settings is also an important feature of this work, and it seems to open up unexplored directions of research in this well-established field. Another important aspect of this work is that it provides an algorithmic template, which can be instantiated using various algorithms that exist in the literature.
On the technical side, the main proofs contain novel and non-trivial ideas, such as the idea of using Turan's Theorem to bound the number of bad queries in terms of the dissimilarity dimension in Lemma 5.
In terms of presentation, the paper is very well written, with all key ideas explained clearly and succinctly. It is also nice that the authors try to give intuition for the important ideas that are involved in proving the Theorems, such as the intuitive reason for why the dissimilarity dimension is "tighter" than the Eluder dimension (before Proposition 12).
Weaknesses: The main weakness in terms of results is the decaying estimation assumption, which is crucial in order for Algorithm 1 to work. The authors provide several alternative assumptions which imply the decaying estimation, such as the existence of an online regression oracle with sublinear regret, or bounded covering numbers for the set of alternatives. While these are reasonable assumptions that have been used in many settings in prior work, it would be interesting to see whether they can be relaxed, or whether they are somehow related to the assumption of boundedness of the dissimilarity dimension. Right now there is no discussion in the paper about that relation.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: -One question, which is also briefly discussed by the authors, involves the computational complexity of the proposed algorithms. In particular, are there some interesting conditions for the similarity measure $\rho$, under which the least squares optimization of Algorithm 1 can be implemented in polynomial time?
-It is implicitly assumed that for any given $z_1,z_2 \in \mathcal{Z}$, the algorithm can compute $\rho(z_1|z_2)$. This is certainly the case in the specific settings of SQ learning and structured bandits, which are studied in the paper, but one could imagine situations where that computation is not possible. Would it be possible to impose some conditions on $\rho$ which still enable learning of the target, without having access to an evaluation oracle for $\rho$?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 3 good
Limitations: Yes, the limitations have been adequately addressed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal:
We thank the reviewer for their overall feedback and suggestions.
- “Decaying estimation” - First, we note that the analysis of the eluder dimension [24] exhibits a similar dependence: their upper bound relies on both the eluder dimension and the log of the covering number. As the reviewer points out, we also offer various alternative assumptions that imply the decaying estimation property. Nonetheless, we do not dismiss the possibility that the boundedness of the dissimilarity dimension could potentially relax the need for such additional assumptions. We leave this question as an interesting future work.
- The case that rho cannot be computed is an interesting setting to be considered. In this case, the learner cannot access the value of rho, and thus cannot directly exploit the connections (or “dissimilarity”) between different functions in the functions class, as we do in our framework. It seems to be akin to zeroth-order optimization. Although it is beyond the scope of this work, it is an interesting problem to study within the interactive estimation framework.
---
Rebuttal Comment 1.1:
Comment: Thank you for your response. My questions have been adequately addressed. | Summary: In this work the authors introduce a framework for interactive estimation, where the learner interacts with some oracle that provides information in the following way: the learner submits a query $z$ and the oracle returns a stochastic reward whose mean is given by a measure of dissimilarity between the query and the ground truth. The main benchmark they care about is the regret guarantees that the learner can achieve. The authors propose a dimension which they call the dissimilarity dimension whose finiteness provides a sufficient condition for learnability in this model. They also propose an algorithm based on the least-squares method that achieves sublinear regret when the dissimilarity dimension is finite. Then, they also describe an online-to-batch conversion that can be used to establish learning guarantees in the offline setting. Moreover, they show how their framework can be instantiated to capture the SQ learning setting and the structured bandit setting. Finally, they illustrate learning tasks for which their approach yields sharper bounds than the state of the art.
Strengths: * The authors are studying fundamental learning problems and any new perspectives which lead to better results in this line of work should be welcome.
* The dissimilarity dimension can be viewed as a more localized version of the eluder dimension, which as the authors show, can lead to better bounds than the latter.
* It is always interesting to try to unify settings, but I have some doubts about this unification (see Weaknesses/Questions).
* Even though the techniques build upon the prior work, they are not trivial.
* This dimension could have applications in other settings as well.
* The paper is well-written and easy to follow, I particularly enjoyed the comparison with the eluder dimension and the explanation about the improvement that the dissimilarity dimension yields.
Weaknesses: * My main concern is that the unification that the authors propose under this framework feels a bit forced and not very natural. In particular, in the model the authors propose, the learner queries a point $z_t$ in every round $t$ and gets a stochastic reward whose mean is $\rho(z_t|z^\star)$, where $z^\star$ is the ground truth and $\rho$ can be thought of as a dissimilarity measure. In order to get the results in the bandit setting, they let $z = (a, f)$, where $a$ is some arm and $f$ is some reward function that maps arms to rewards. This does not feel very natural because the traditional way to think of this setting is that the learner queries arms, not arms and functions. Moreover, the distance function is defined as $\rho_{bandits}((f,a)|(f^\star,a^\star)) = f^\star(a)$, so we can see that neither $a^\star$ nor $f$ affects its value. It seems as if it was defined this way in order to make the definition of the dissimilarity measure work, but it does not exactly convey the message it was supposed to. Am I missing something?
* Line 50: "Largely characterizes learnability in our setting". This is a bit vague and inaccurate. I don't understand what it means for a dimension to "largely characterize" learnability, since it either characterizes it or it doesn't. What you have shown is that a finite dissimilarity dimension implies learnability, and that there are instances with infinite such dimension that are not learnable. However, there could be problems whose dissimilarity dimension is infinite, but they are still learnable.
Technical Quality: 4 excellent
Clarity: 3 good
Questions for Authors: * What is the intuition behind the scaling of the distance in definition 1 by $\sqrt{d}$?
* The example in Proposition 12 is certainly interesting, but are you aware of some more "natural" settings where the dissimilarity dimension can be finite whereas the eluder dimension is infinite?
* Can you derive logarithmic regret bounds that depend on some notion of suboptimality gap? For instance, in the context of bandits that would be the gap between the reward of the best arm and the second best arm.
* Can the results be extended to contextual bandits and RL? A long line of work has established strong connections between the eluder dimension and regret bounds in various RL settings.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 3 good
Contribution: 3 good
Limitations: n/a
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their overall feedback and suggestions.
- “My main concern… not very natural. …” - Note that we are *not* suggesting a new protocol for bandit learning. Instead, we are presenting a reduction from bandit learning to our setting, and using that to obtain a tighter analysis of optimistic algorithms for bandits.
Although the protocol where the learner submits pairs (f,a) seems more complicated than the standard bandits protocol, we believe that it is a fairly natural fit for many algorithms for realizable bandits. Such algorithms often keep track of a version space (set of functions f consistent with data), and so at each point of interaction, there is an implicit f_t that is associated with a_t produced at that time. Our protocol is just making this f_t explicit. We will add discussion of this intuition to the paper.
- “Largely captures” - We agree with the reviewer. We will clarify that we show a general upper bound as well as lower bounds for certain instances, though not a complete characterization.
- “intuition behind … sqrt(d) “ - This is a great question. The sqrt(d) factor simplifies the comparison between the dissimilarity dimension and eluder dimension (e.g. Thm. 11 and Prop. 12). However, it is possible to replace sqrt(d) by a general growth function g(d) and then state the bounds in terms of the function g. We have not been able to take advantage of that generality, but we will mention it in the paper (and leave open for future research).
- “Example in Proposition 12” - The key aspect of the example is the fact that the function class is simple near the optimal action, but can be complicated far from it. We expect this to occur in a wide range of cases. As a simple example, consider non-zero linear functions on the [0,1] interval. Clearly the optimal “arm” would always be on the boundary, either 0 or 1. Even in this simple case we can “inject” as many bad points as we want into the linear functions some distance away from the optimum. This will “fool” the eluder dimension which can be arbitrarily large, but the dissimilarity dimension will remain constant. The choice of the circle example in proposition 12 demonstrates this happens even when all arms are optimal for some function.
- “suboptimality gap” + “contextual bandits and RL” - Both of the last questions posed by the reviewer are very interesting ideas that are beyond the scope of this work, and could be explored in future research.
---
Rebuttal Comment 1.1:
Comment: I would like to thank the authors for their detailed response. I did understand that you weren't suggesting a new protocol for bandit learning and this was only part of the analysis -- apologies if that wasn't clear in my review. I still find this unified model a bit less satisfactory than I was expecting when I first read the abstract and the introduction, but of course this is something very subjective. I think the dimension you propose is very interesting in its own, especially since it provides tighter bounds than the well-studied eluder dimension. I remain positive about the paper. | Summary: The paper introduces a learning model where an unknown target exists, and in each round, the learner selects a choice and receives a stochastic feedback indicating the similarity between the choice and the target. The objectives examined involve minimizing regret or providing a PAC-style guarantee, ensuring that a choice with approximately maximum similarity to the goal is returned with high probability. This model encompasses both bandits and correlational statistical query algorithms.
The paper introduces the concept of "dissimilarity dimension", a combinatorial measure that captures learnability within the model. It proves that a bounded dissimilarity dimension implies learnability, achieved by any algorithm satisfying two properties: (i) selecting choices with high similarity to themselves and (ii) improving the quality of reward estimates over time. The paper presents a simple algorithm that satisfies these properties, demonstrating that it achieves bounded regret and a PAC generalization guarantee.
In terms of the connection to the statistical query (SQ) model, the paper establishes that the dissimilarity dimension is polynomially related (both in upper and lower bounds) to the “strong SQ dimension”. Additionally, regarding the connection to bandits, the paper demonstrates that the dissimilarity dimension is upper bounded by the eluder dimension, which is commonly used in bandit problem literature. The paper provides concrete examples of bounding the dissimilarity dimension for bandit problems and illustrates cases where the dissimilarity dimension is significantly smaller than the eluder dimension.
Strengths: The paper introduces a novel learning protocol and presents a combinatorial dimension that captures learnability. Furthermore, it establishes connections between this dimension and two extensively studied learning models. The fact that the proposed dimension can be much smaller than the existing eluder dimension is particularly interesting and perhaps surprising result. The work is original and novel, and I would be happy to recommend acceptance.
Weaknesses: I do not see any strong weaknesses.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: I was under the impression that the results in Szörényi 2009 hold for real-valued hypotheses as well. Is there a particular reason why Section 4 restricts to binary classifiers?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 4 excellent
Limitations: Addressed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their overall feedback and suggestions.
- “Szörényi 2009 - real-valued functions?” - Perhaps we are missing something, but it seems that in the Preliminaries section of [25], Szörényi defines a “concept” to always be a mapping from the domain to binary labels {-1, 1}. The proof he gives on the upper bound of the Strong-SQ Dimension is also relying on that assumption.
- In addition, as far as we know, standard SQ models only deal with binary labeled hypotheses (see survey [23]). In another extension of the classic model to distribution learning, the queries themselves remain binary (e.g Definition 26 in [23]), leading to 0/1-loss instead of other losses for real-valued hypotheses. In contrast, our framework encompasses real-valued hypotheses learning, akin to linear bandit learning, as suggested by our model. | Summary: This paper presents a novel unified framework for interactive estimation. The framework involves a learner making repeated queries to the environment and observing rewards that correspond to the similarity between the query and an unknown target. The algorithm then uses these observed signals to estimate the target. This general framework for interactive learning is connected to two instantiations of learning models: the SQ model and structured bandits, which are studied in this paper. Additionally, a new combinatorial dimension is proposed to measure the difficulty of learning, and it is used to offer standard analysis to achieve tighter regret bounds.
Strengths: The paper has several notable strengths. It proposes a new unified framework for interactive estimation and a novel dimension, called the dissimilarity dimension, to measure hardness. It connects two classical frameworks, analyzes them within the new model, and finds that tighter regret bounds can be achieved with the proposed dimension. Furthermore, it points to further possible uses of the proposed dissimilarity dimension.
Weaknesses: Here are some potential improvements to the original content:
1. The writing is sometimes too abstract to convey the concepts (Introduction) and hard to follow (Section 3, Algorithm). It would be beneficial to provide more explanations and practical settings.
2. The completeness of the framework can be further enhanced through additional empirical experiments.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: 1. Why do we need a new dissimilarity measure, and what is the intuition behind it? I am curious about the difference between it and typical approaches to dissimilarity measurement.
2. Since this framework claims to be unified to many settings, is there any experiment we can validate its universal feature?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: The paper introduces a unified framework for interactive learning that has the potential to be applied in various scenarios. It also establishes connections with two classical models to demonstrate its impact and applicability. The completeness of the framework can be further enhanced through additional empirical experiments.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their overall feedback and suggestions.
- “Dissimilarity measure” - To clarify, it is not our goal to quantify dissimilarity. Instead, our goal is to quantify learning complexity. As in similar approaches for characterizing learnability (e.g., the classical VC dimension for PAC learning, the Littlestone dimension for online learning, etc.), we introduce a new dimension that will quantify learnability in settings of interactive estimation, such as bandits and SQ. Only the *name* of the dimension is dissimilarity, chosen merely as an intuition for its definition.
- “Experiments” - we emphasize that our main contribution is the theoretical framework characterizing learning complexity for interactive learning settings. While there is an algorithmic aspect to our work, it is not the primary focus of the paper. The proposed algorithm is, in fact, an abstraction of existing algorithms for well-studied specific settings, such as multi-armed bandits (see Appendix D for details). We believe that the real significance of our work lies in the conceptual unification of classically studied learning settings. Thus, it provides a new analytical perspective for various learning scenarios. Notably, our approach yields an improved analysis of existing algorithmic approaches in bandit learning.
---
Rebuttal Comment 1.1:
Title: Questions
Comment: Dear Reviewer,
We would like to know whether this explanation made sense. If it did, and it addresses the reviewer's concerns, we would very much appreciate it if the reviewer could reassess their score to reflect this renewed understanding. We hope the discussion with the other reviewers has also helped address any lingering concerns.
Thanks a lot! | null | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Bicriteria Approximation Algorithms for the Submodular Cover Problem | Accept (poster) | Summary: This work focuses on designing approximation algorithms for the submodular cover problem. In this problem, we are given a submodular function $f$ over subsets of a ground set $U$, and the goal is to find the smallest subset $S$ such that $f(S)$ is greater than or equal to a given threshold. The authors present a fast algorithm for monotone functions with similar approximation guarantees as greedy, while being almost a factor of $n$ faster. They improve the approximation guarantee with an algorithm that runs in exponential time with respect to the size of the optimal solution. They extend their results to non-monotone functions and achieve a high-quality solution with exponential running time.
Furthermore, they consider the regularized submodular cover problem, which is the submodular cover problem with an additional modular cost function subtracted from the objective. They are the first to study this problem and present bicriteria algorithms for it. Moreover, they compare their algorithms with baselines in multiple experiments.
Strengths: The authors' work makes significant contributions to the field of submodular cover optimization. Their algorithms are simple and easy to understand, and the results are explained clearly. The authors also provide a fair comparison with previous works. The experimental section is well-organized and the datasets used are commonly used in submodular maximization publications in ML conferences. The results show the strengths and weaknesses of this work, and support the theoretical guarantees.
Weaknesses: First, the novelty of the work is not clear to me. The algorithms and proof techniques are similar to previous works and are well-known in the literature. Second, exponential time algorithms are not practical in most cases, and I do not agree with the reasons provided in this work. Third, the results for Regularized SCP are good, but less interesting compared to the rest of the results in this work.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Please provide details if the weaknesses mentioned above are not accurate.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your time and comments on the paper.
In this paper, we do use a variety of techniques inspired by those found in literature on the cardinality constrained submodular maximization problem (SMP). However, a few examples of components of our algorithms and analyses that are particularly distinct from existing approaches for SMP are: (i) In many algorithms for SMP, the size of the optimal solution is known to be beneath an input budget. This value is used in many algorithms including ones that inspired our algorithms such as the stochastic greedy algorithm of Mirzasoleiman et al. (inspired stoch-greedy-c), and the streaming algorithm of Alaluf et al. (inspired stream-c). In stoch-greedy-c we use a method of adaptively guessing the size of the optimal solution $|OPT|$ throughout the algorithm in order to determine how many samples are needed at each iteration. (ii) Theoretical guarantees for the regularized submodular maximization problem [Harshaw et al.] take an unusual form that is not a typical approximation guarantee. In order to convert these algorithms to ones for the RSCP (convert-reg), we had to use a technique of distorting the function $f$ and then giving it to a regularized submodular maximization subroutine.
We agree that exponential time algorithms are usually impractical. As described in the paper in Section 2.2, it has previously been shown that for general SCP when $f$ is not assumed to be monotone it is not possible for an algorithm to guarantee that its returned solution satisfies $f(X)>\tau/2$ in polynomially many queries of $f$ assuming the value oracle model [Crawford, 2023]. Therefore, in order to have $f(X)$ be nearly feasible, i.e. reach $(1-\epsilon)\tau$, we propose stream-c which requires an exact solution to submodular maximization on an instance of size $O(|OPT|/\epsilon^2)$. When $|OPT|$ is small or in other restricted settings, stream-c may be a reasonable choice.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for their response. | Summary: The paper studies bicriteria approximation algorithms for sub-modular cover problem (SCP). In the problem, we are given an oracle to some sub-modular function f, and a threshold tau. Our goal is to find the smallest set X such that f(x) >= tau.
The paper is not well-written at all. There are critical typos in the theorem statements, making it hard to understand what the main results are. Only after reading the rebuttal and the supplementary materials carefully, I was able to put all the pieces together. Two main results of the paper are
For the monotone SCP problem (MSCP), the result is an algorithm that achieves O(ln(1/eps))-approximation with 1-epsilon violation on the value, using O_eps(n log n) queries. This improves the previous query complexity of O(n^2). However, the algorithm is called threshold-greedy-c, described in Algorithm 5 in the supplementary material. But the theorem statement says the algorithm is threshold-bi, which is never defined.
The second result is for SCP. It says there is an algorithm that achieves 1/epsilon approximation with 1-epsilon violation on the value. The result uses theorem 2, which converts a randomized algorithm for the dual problem SMP into a randomized algorithm for SCP. The algorithm needs to guess a budget g, and repeat the algorithm for SMP multiple times in order to convert the expectation guarantee into a high-probability guarantee. However, in the theorem statement, gamma should be epsilon as pointed out by the authors in the rebuttal, and alpha is never defined. From the supplementary material, I learnt that alpha is the parameter controlling the multiplicative step size of the guessed size budget g, and the term log |OPT| in the running time should really be log_{1 + alpha} |OPT|.
I am raising my score to borderline reject. There are so many typos in the theorem statements. Yes, they are just a few typos in the whole paper, but I think mistakes in the main theorems, which give the formal descriptions of main results, are intolerable. They are the first things that people will read, and they only take a few lines in the paper.
If I needed to judge the contribution of the results, without taking the typos into consideration, I would say the first result makes a fair improvement, but I do not see too much technical contribution from the second result. It is based on the dual relationship between SCP and SMP, and conversion from expectation to high probability comes from repeating the algorithm multiple times.
Strengths: The paper improved the query complexity for the MSCP algorithm from O(n^2) to O(n log n).
Weaknesses: The paper is not written well. There are critical typos in the statements of the main theorems.
Technical Quality: 2 fair
Clarity: 1 poor
Questions for Authors: No questions.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 1 poor
Contribution: 2 fair
Limitations: None.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We would like to thank the reviewer for their time and comments on the paper. We address each concern and question below.
(1) "The authors stated that they have an algorithm that achieves the best approximation guarantee with O(n ln(n)) queries. I do not see any proof of the result."
We will explain this point more clearly in the next version of the manuscript. It is stated in the paper (see contribution (i) in the introduction) that we provide two algorithms for MSCP that achieve nearly the same theoretical guarantees as the greedy algorithm but in $O(n\ln(n))$ queries of $f$. The two algorithms that make $O(n\ln(n))$ queries to $f$ are threshold-bi and stoch-greedy-c. The theoretical guarantee for threshold-bi is presented in Theorem 1, which states that the number of required queries is $O(\frac{n}{\epsilon}\log(\frac{n}{\epsilon}))$. If we fix $\epsilon$, then the number of queries is $O(n \ln(n))$. The theoretical guarantee for stoch-greedy-c is presented in Theorem 3, which states that the number of required queries is $O\left(\frac{\alpha}{1+\alpha}n\ln(1/\delta)\ln^2(3/\epsilon)\log_{1+\alpha}(|OPT|)\right)$. If we fix $\alpha$, $\delta$, and $\epsilon$ then the number of queries is $O(n \ln(n))$.
(2) "For the problem with (1-epsilon)-approximation allowed on the f(S) value, do you get an O(1/epsilon), or an O(ln(1/epsilon))-approximation guarantee? In result (ii), the size of S is OPT/epsilon. But in Theorem 1, the approximation ratio is ln(2/epsilon) = O(ln(1/epsilon))."
The two approximation ratios that you describe are for different problem settings. If we assume that $f$ is monotone, we get the stronger $O(\ln(n))$ approximation guarantees proven for threshold-bi and stoch-greedy-c (Theorems 1 and 3). If we do not assume that $f$ is monotone and therefore consider the more general setting, then the approximation guarantee we have is the weaker $O(1/\epsilon)$ guarantee proven for stream-c in Theorem 4. All of the algorithms described above achieve a $(1-\epsilon)$-approximation on the f(S) value (see Theorem 1, Theorem 3 and Theorem 4). There is a typo that may have contributed to the confusion: Theorems 1 and 2 are in the monotone submodular section but say "SC"/"SCP" instead of the correct "MSCP".
(3) "Finally, what is the purpose of Theorem 2? How does it improve the result in Theorem 1? What does gamma mean in the theorem statement?"
The purpose of Theorem 2 is to provide a method for converting randomized algorithms for the dual problem SMP into ones for SCP. In particular, we are interested in using Theorem 2 to convert the stochastic greedy algorithm for monotone SMP [Mirzasoleiman et al.] into an algorithm for MSCP.
We actually made a typo with the $\gamma$ and it should be replaced with $\epsilon$. Thank you for bringing this to our attention. The corrected version of Theorem 2 appears in the supplementary material on page 14 of our submission.
The method described by Theorem 2 gives results beyond those of Theorems 1 and 3 for monotone SCP. In particular, consider the stochastic greedy algorithm of Mirzasoleiman et al., which is a randomized algorithm for SMP that returns a solution set $S$ satisfying $f(S)\geq(1-1/e-t)f(OPT)$ and $|S|=\kappa$. Then in Theorem 2, $\epsilon=2/e+2t$ and $\beta=1$. Then by using the algorithm convert-rand (Algorithm 6 in the appendix) Theorem 2 states that we have a $(1+\alpha, 1-2/e-2t)$-bicriteria that holds with probability $1-\delta$, where $\alpha,\delta$ are input. This is incomparable to the guarantees given in Theorems 1 and 3 since the first part can get arbitrarily close to 1 but the second part, i.e. the guarantee on the feasibility, is relatively weak. In contrast, the approximation ratio on $f$ for both Theorem 1 and Theorem 3 is $1-\epsilon$ and can get arbitrarily close to 1. We will more explicitly describe this example in the next version of the manuscript.
---
Rebuttal Comment 1.1:
Comment: After reading the paper and supplementary materials more carefully, I am able to put all the pieces together. Other than the results for regularized SCP and the empirical study, there are two main results given in the paper.
The first result (i) is for MSCP. It says there is an algorithm that achieves O(ln(1/eps))-approximation with 1-epsilon violation on the value, using O_eps(n log n) queries. This improves the previous query complexity of O(n^2). The algorithm is threshold-greedy-c, described in Algorithm 5 in the supplementary material. But the theorem statement says the algorithm is threshold-bi, which is never defined.
The second result is for SCP. It says there is an algorithm that achieves 1/epsilon approximation with 1-epsilon violation on the value. The result uses theorem 2, which converts a randomized algorithm for the dual problem SMP into a randomized algorithm for SCP. The algorithm needs to guess a budget g, and repeat the algorithm for SMP multiple times in order to convert the expectation guarantee into a high-probability guarantee. However, in the theorem statement, gamma should be epsilon as pointed out by the authors in the rebuttal, and alpha is never defined. alpha controls the precision of the guessed size budget g, and the running time should have the term log_{1 + alpha} |OPT|.
---
Reply to Comment 1.1.1:
Comment: We do have a number of different results on several problems, and so we plan on including a table in the next version of the manuscript that makes each contribution and how they fit in more clear. We would like to clarify the main contributions of the paper as follows:
(1) We have three main results for the monotone submodular cover problem (MSCP) in Section 2.1. These results apply only to MSCP. The first is the algorithm threshold-greedy-c (Algorithm 5 in the appendix) and its guarantees in Theorem 1. The second is the converting method convert-rand that takes randomized algorithms for monotone submodular maximization and converts them into ones for MSCP (Algorithm 6 in the appendix) and its guarantees in Theorem 2. The third is the algorithm stoch-greedy-c (Algorithm 1 in the main text) and its guarantees in Theorem 3. The theorem statements will be edited in order to make it more clear that they apply only to monotone submodular functions (it currently says monotone both in the text and section headings, but some of the theorem statements are unclear). We believe our most interesting result from this section is stoch-greedy-c, which is inspired by the stochastic greedy algorithm for monotone SMP [Mirzasoleiman et al., 2015a] but which uses a method of adaptively guessing the size of the optimal solution throughout the algorithm, yielding a new sample-efficient algorithm for MSCP.
(2) We have one main result for the general submodular cover problem (SCP), where $f$ is submodular but not necessarily monotone. This is the algorithm stream-c (Algorithm 2) and its guarantees in Theorem 4.
(3) Finally, we have a result for the regularized monotone submodular cover problem (RSCP), which does not fall under the setting of SCP since the objective may take on negative values. We propose a method of converting algorithm for the regularized monotone submodular maximization problem, convert-reg (Algorithm 3 in the main text) and its theoretical guarantees are in Theorem 5. We then propose the algorithm distorted-bi for regularized monotone submodular maximization that produces different approximation guarantees compared to existing ones in the literature (see Section 2.3 in the paper for more details on this), in order to be used by convert-reg to produce an algorithm for RSCP.
In addition, "threshold-bi" is meant to say threshold-greedy-c. We will fix this additional typo in the next version of the manuscript. The alpha in Theorem 2 is an input parameter, which we also mention in the corresponding pseudocode convert-rand (Algorithm 6 in the supplementary). We will define it more clearly in Theorem 2. The reviewer is correct that the term $\log(|OPT|)$ in Theorem 2 should say $\log_{1+\alpha}|OPT|$. We want to point out that most of the typos and ambiguity are coming from Theorem 2 and its corresponding algorithm convert-rand. We believe part of the problem is that much of the information on convert-rand is in the supplementary material. We plan to fix the typos as well as add more discussion about convert-rand and its theoretical guarantees in the main paper.
Strengths: - Significance: This paper uses a powerful framework which leads to not only novel algorithms but also new problem formulations. Expanding the scope of SMP/SCP conversion is of independent interest and has high potential impact
- Originality/Clarity: Novel combination of previous work and new ideas (discussed and cited clearly)
Weaknesses: - Experiments: The impact of the regularized SCP problem would be more significant if the paper included experimental results comparing convert-reg and distorted-greedy-bi. The paper would be improved if it provided additional motivations/applications for each SCP formulation. When is SCP preferable to SCM?
- Typo: distorted algorithm is distorted-greedy-bi in the main paper and distorted-bi in the appendix
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: - What are some applications of regularized SCP?
- Are any bicriteria approximations or query complexity results optimal, or can they be improved further?
-----
EDIT: I have read the author rebuttal, and it addressed my questions sufficiently.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 4 excellent
Limitations: No discussions of limitations or broader impact
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you very much for your time and comments. We address each of your questions below.
- "What are some applications of regularized SCP?"
Many applications of the well-studied regularized submodular maximization problem (SMP) [Harshaw et al., 2019] are also suitable applications for RSCP, just with a shifted emphasis from keeping within a budget (the constraint in regularize SMP) to achieving a certain value of $f$ (the constraint in RSCP). As an example, consider the profit maximization setting where $f=g-c$ and $g$ models the revenue from a group of products while $c$ models the price of producing them. Then RSCP would formalize the problem of achieving a certain amount of revenue ($\tau$) using the minimum amount of products. In addition, many existing applications of SCP such as data summarization [Mirzasoleiman et al., 2016] or influence in a social network [Crawford et al., 2019] could be extended to RSCP by adding a modular penalty/cost for including each item into the solution set.
- "Are any bicriteria approximations or query complexity results optimal, or can they be improved further?"
It was proven by [Feige, 1998] that set cover cannot be approximated efficiently below a threshold of $(1-o(1))\ln(n)$ subject to the condition that NP has slightly superpolynomial time algorithms. Set cover is a special case of the general monotone submodular cover problem, and therefore this result applies to MSCP as well as SCP. If we fix the parameter $\epsilon=1/n$, then the algorithms threshold-bi and stoch-greedy-c presented in our paper achieve nearly feasible solutions for MSCP with approximations of $\ln(2n)+1$ (Theorem 1) and $(1+\alpha)\lceil \ln(3n)\rceil$ (Theorem 3) respectively. Therefore these algorithms have bicriteria guarantees that are somewhat close to the best possible. However, we have not yet been able to determine the optimality of our results beyond this.
---
Rebuttal Comment 1.1:
Title: Reply to Rebuttal
Comment: The authors have addressed my questions sufficiently. | Summary: The paper considers the Submodular Cover Problem (SCP), where one
is given a submodular function f through an oracle that returns the
value f(A) for each subset A of the underlying set U (over which
the function f is defined). The goal is to find a minimum cardinality
subset X of a set such that the value f(X) is >= a given threshold
\tau and |X| is a minimum. Since this problem is NP-hard in general,
researchers have considered bicriteria approximation algorithms,
where the size of the size of the returned subset X is within a
small factor of the size of a minimum subset and the value f(X) is
at least a suitable fraction of the given threshold. While some
such approximation algorithms are available (using a dual problem),
they may use quadratically many queries to the oracle. The focus
of the paper is to develop methods whose approximation guarantees
are close to known results but which use significantly fewer queries
to the evaluation oracle. Results are provided for general SCP as
well as some restricted versions.
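For illustration, here is a minimal sketch of the classical greedy baseline for monotone submodular cover that the paper improves upon, with a query counter to make the oracle cost visible. The coverage function, the (1 - eps)-feasibility stopping rule, and all names are toy assumptions of mine, not the paper's algorithms.

```python
def greedy_bicriteria_cover(ground_set, f, tau, eps=0.1):
    """Wolsey-style greedy for monotone submodular cover: repeatedly add the
    element with the largest marginal gain until the solution is
    (1 - eps)-feasible. Each iteration queries f on every remaining element,
    which is the roughly quadratic query baseline the paper reduces."""
    X, queries = set(), 0
    fX = f(X); queries += 1
    while fX < (1 - eps) * tau:
        best_e, best_gain = None, 0.0
        for e in ground_set - X:
            gain = f(X | {e}) - fX
            queries += 1
            if gain > best_gain:
                best_e, best_gain = e, gain
        if best_e is None:  # no element improves f; threshold unreachable
            break
        X.add(best_e)
        fX = f(X); queries += 1
    return X, queries

# Toy instance: f counts the items covered by the chosen sets.
sets = {0: {1, 2, 3}, 1: {3, 4}, 2: {5, 6}}
f = lambda X: len(set().union(*(sets[i] for i in X)) if X else set())
solution, queries = greedy_bicriteria_cover(set(sets), f, tau=6, eps=0.1)
print(sorted(solution), queries)
```

On this instance the greedy picks all three sets to cover the six items; the query counter shows how the per-iteration scan over all candidates drives the oracle cost.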
Strengths: (1) The paper presents new methods for SCP with significantly smaller numbers of queries to the function evaluation oracle without significantly affecting the performance guarantees. To this reviewer's knowledge, this is the first work that achieves this goal for SCP. Given the importance of submodular optimization in ML, these results represent a useful advance.
(2) The paper nicely summarizes prior work on the topic.
Weaknesses: A (very) minor weakness is that it takes a fair amount of time to understand the details regarding the algorithms. (This is due to the nature of the problem.)
Technical Quality: 4 excellent
Clarity: 3 good
Questions for Authors: (1) In the results for the general SCP (lines 50--54, page 2), it is mentioned that the algorithm does not run in polynomial time in general. Is there a known hardness result which precludes a polynomial-time algorithm for this case (under some well-accepted hypothesis in complexity theory)?
(2) Lines 98--100 in Section 1.2 discuss a result from [Crawford, 2023]. This result (as described) says that one "cannot have an algorithm for SCP which can guarantee that f(X) >= \tau/2 using only polynomially many queries". Does this result rely on an underlying complexity hypothesis (such as P != NP)?
Some minor suggestions to the author(s):
(a) In stating the results for RSCP (lines 56--62 on page 2), please consider indicating the number of queries to the oracle.
(b) Line 65: "make a large" ---> "provides a large"
(c) Line 120 (page 3): "about queries" ---> "with respect to the number of queries"
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 3 good
Contribution: 4 excellent
Limitations: None
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your time and comments about the paper. We will update the manuscript to add your suggested modifications. We address your questions below.
(1) The result we are aware of for SCP is that under the value oracle model of access for $f$, meaning that $f$ is only accessed as a black box that returns $f(X)$ given $X$, it is not possible for an algorithm to produce a solution $X$ with the guarantee that $f(X)>\tau/2$ in polynomially many queries to $f$ [Crawford, 2023]. Our algorithm stream-c guarantees $f(X)\geq (1-\epsilon)\tau$ for any $\epsilon > 0$, but on the other hand is not guaranteed to make polynomially many queries to $f$.
(2) The result of [Crawford, 2023] does not rely on any complexity hypothesis, but rather applies to any algorithm under the value oracle model. For more intuition about hardness results for submodular optimization problems assuming the value oracle model access to $f$, we refer the reviewer to Section 4 of [Feige et al., 2011]. In fact, the result that we cite from [Crawford, 2023] for general SCP is closely related to Theorem 4.5 in this paper.
---
Rebuttal Comment 1.1:
Comment: I have gone through the rebuttal. My questions/concerns have been addressed satisfactorily. | null | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Joint Prompt Optimization of Stacked LLMs using Variational Inference | Accept (poster) | Summary: The paper introduces a framework called Deep Language Network (DLN) that involves stacking multiple large language models (LLMs) as stochastic layers in a deep network. The prompts at each layer serve as tunable parameters, and the output of one layer is fed as input to the next layer. The LLMs are trained jointly using variational inference. The DLN architecture achieves higher performance than a single layer and can sometimes match the performance of larger and more powerful models like GPT-4. The authors discuss the analogy between LLMs and traditional parametric and nonparametric models, explore the limitations of LLMs, propose the use of variational inference for training DLN, and demonstrate the performance of DLN on various datasets. They suggest that DLN can serve as a framework for characterizing and optimizing prompt learning techniques. The paper also discusses future directions such as testing with different language models, fine-tuning stackable LLMs, and expanding DLN to accommodate arbitrarily directed acyclic graphs. The authors hope that DLN and modular approaches like theirs will address the challenges associated with LLMs and make them more adaptable to different use cases. However, they acknowledge the limitations of technical solutions and the need to consider deployment and ethical considerations when using such models in real-world applications.
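The layered structure described in the summary can be sketched as follows. This is an illustrative stub only: `call_llm` stands in for a real LLM call, and the joint variational-inference training of the prompts is omitted.

```python
def call_llm(prompt, x):
    """Stub for an LLM call; a real implementation would query a model.
    Here it just tags the input so the data flow is visible."""
    return f"[{prompt}] {x}"

class DeepLanguageNetwork:
    """Two LLM 'layers': the first layer's output becomes the second layer's
    input, and each layer's prompt is the tunable parameter (stochastic in
    the paper and optimized jointly via variational inference; deterministic
    in this stub)."""
    def __init__(self, prompt1, prompt2):
        self.prompt1 = prompt1  # tunable parameter of layer 1
        self.prompt2 = prompt2  # tunable parameter of layer 2

    def forward(self, x):
        h = call_llm(self.prompt1, x)  # hidden "thought" produced by layer 1
        y = call_llm(self.prompt2, h)  # final output produced by layer 2
        return y

dln = DeepLanguageNetwork("Decompose the task", "Answer using the decomposition")
print(dln.forward("Is 17 prime?"))
# → [Answer using the decomposition] [Decompose the task] Is 17 prime?
```

The sketch only shows the forward data flow; the paper's contribution is learning prompt1 and prompt2 jointly by treating h as a latent variable.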
Strengths: The paper introduces the concept of Deep Language Networks (DLNs), which stack multiple Large Language Models (LLMs) and train them jointly using variational inference. This approach offers a new perspective on leveraging the power of LLMs and demonstrates improved performance compared to a single-layer model. It presents a layered architecture that decomposes the task into a series of smaller sub-tasks, each of which is more easily solvable by an LLM. This decomposition allows for more efficient training and better performance. It details prompt engineering and in-context learning techniques for optimizing the prompts associated with each layer of the DLN. These techniques enable fine-tuning of the DLN's performance and allow for better adaptation to different tasks and datasets. The paper utilizes variational inference to learn the prompts in the DLN. This approach enables the joint search over the prompts, addressing the challenge of optimizing multiple prompts in a deep architecture. The paper provides experimental results on various datasets, demonstrating the effectiveness of DLNs compared to single-layer models. The results show that DLNs can achieve performance comparable to higher-capacity models like GPT-4, even when each LLM in the network is smaller and less powerful. It discusses practical aspects of DLN implementation, such as proposal diversity, learning in-context learning, and backtracking and memory optimization. These considerations make the DLN approach more feasible and effective in real-world scenarios. The strengths of the paper lie in its innovative approach, theoretical framework, practical insights, and experimental validation, showcasing the potential of Deep Language Networks for improving language modeling tasks.
Weaknesses: The proposed Deep Language Networks (DLNs) introduce a more complex architecture compared to single-layer models. This complexity may make the implementation and training of DLNs more challenging, requiring significant computational resources and expertise. Although the paper compares DLNs to single-layer models, it does not provide a comprehensive comparison to other state-of-the-art language models or architectures. Without such comparisons, it is difficult to assess the relative performance and advantages of DLNs against other advanced models. Additional evaluation metrics would provide a more comprehensive assessment of DLNs' strengths and weaknesses. The experiments and results presented in the paper may not generalize to all types of language modeling tasks and datasets. The performance of DLNs could vary depending on the specific domain, language, or task requirements. The paper should provide a detailed analysis of the limitations and potential failure cases of DLNs in different scenarios. DLNs consist of multiple layers of Large Language Models (LLMs), which can increase the computational overhead during training and inference. It does not extensively discuss the computational efficiency of DLNs, including the training time, memory requirements, and inference latency. These factors could limit the scalability and applicability of DLNs in resource-constrained environments.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Can you provide more details on the computational and memory requirements of the proposed Deep Language Networks (DLNs) compared to single-layer models?
Are there any specific challenges or limitations associated with implementing and training DLNs that were not mentioned in the paper?
Could you include a more comprehensive comparison of DLNs against other state-of-the-art language models or architectures? This would help understand the relative performance and advantages of DLNs in a broader context.
Are there any plans to explore alternative metrics in future work?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 4 excellent
Limitations: Yes, the content you provided appears to adequately address the requirement of addressing the limitations and potential negative social impact of the authors' work. The authors acknowledge the limitations of addressing societal issues through technical work and emphasize that their modular approach aims to alleviate some of the issues associated with large language models (LLMs), such as concentration of power and difficulty in training. They also express the hope that their approach will make LLMs more adaptable and suitable for a wider range of use cases. However, the authors explicitly state that they do not address the deployment of such models, when and how they should be used, or provide additional guarantees against their misuse. They highlight the importance of considering the performance of these models in uncontrolled environments and the need for justification before deploying them in high-stakes situations. By acknowledging these limitations and potential negative societal impacts, the authors demonstrate transparency and responsibility. They recognize the need for further consideration of ethical and societal implications beyond the technical performance of their work. The content provided covers the requirement and fulfills the guidelines for addressing limitations and broader societal impacts as specified in the checklist.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your review! We are glad that you appreciated our work.
---
***“comparison to other state-of-the-art language models or architectures”***
We provide additional experimental results; please refer to ***Table A/B/C/D*** in the general response for these comparisons.
---
***“analysis of the limitations and potential failure cases of DLNs in different scenarios”***
We will discuss the limitations of DLNs more in depth in the camera ready. Notably, one of the biggest limitations is that the base LLM cannot be *too small*. This can be seen in Logic.7 for example, which is the hardest task in our benchmark, where every layer must do a notable amount of computation. So there is a tradeoff between the capacity of the base LLM and the depth of the DLN that is not discussed enough in the paper right now and that we will address in the camera ready.
---
***“Computational efficiency, including the training time, memory requirements, and inference latency”***
Memory: We emphasize that DLN does not have neural net components per se, so no GPU is needed on a local machine. We only require access to an endpoint, which can be either local or remote. If the endpoint is local, it needs to be powered by local GPU resources, of course.
Training time / inference latency: we are strongly dependent on the traffic of the OpenAI API. Our DLN-2 results take roughly 1 hour to execute (when there is not too much traffic).
We are working to relax DLN's dependence on online APIs; specifically, we are looking into applying DLN to open-source LLMs, such as WizardLM. Please refer to ***Table D*** in the general response for some preliminary results.
---
***“Alternative metrics”***
To our understanding, the reviewer refers to the scoring functions DLN uses to obtain training signals. Indeed, assuming access to output log probabilities makes it harder for DLN to use black-box LLMs (e.g., GPT-4) as a backbone. We agree that alternative training signals could be a useful future direction. In fact, we are actively exploring alternative ways to help DLNs learn. In our additional experimental results (***Table C***), we include a variant of DLN-1 using GPT-4 as the backbone. Specifically, we use accuracy as the final scoring function in that setting. Devising a scoring function for the hidden layer when log-probabilities are not available (i.e., for GPT-4) is a future direction.
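A minimal sketch of such an accuracy-based final score (illustrative only; function and variable names are ours, not the paper's):

```python
def accuracy_score(outputs, targets):
    """Fraction of sampled final-layer outputs matching the labels; a usable
    training signal when token log-probabilities are unavailable (e.g., GPT-4)."""
    assert len(outputs) == len(targets)
    return sum(o == t for o, t in zip(outputs, targets)) / len(targets)

acc = accuracy_score(["yes", "no", "yes"], ["yes", "yes", "yes"])  # 2/3
```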
---
Rebuttal Comment 1.1:
Title: Looking forward to
Comment: Thanks for the clarifications, looking forward to seeing your updates. | Summary: This paper suggests stacking multiple large language models (LLMs) together, with tunable parameters represented as prompts at each layer. Given that these prompts are discrete natural language elements, their direct optimization using a gradient-based method is challenging. To address this, the authors propose a prompt optimization framework: (1) generating N local candidates using a proposal distribution, and (2) scoring each candidate to select the one that maximizes the scoring function. To apply this framework to a 2-layer setting, they propose a variational inference objective introducing an additional hidden proposal. The effectiveness of this method, tested on multiple datasets, is demonstrated in both 1-network and 2-network settings.
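The two-step search loop summarized above, proposing N local candidates and keeping the arg-max under the scoring function, can be sketched roughly as follows; `toy_proposal` and `toy_score` are hypothetical stand-ins for the LLM-driven proposal distribution and scorer, not the paper's actual implementation:

```python
import random

def propose_candidates(prompt, n, proposal_fn):
    """Step (1): sample n local candidate prompts around the current prompt."""
    return [proposal_fn(prompt) for _ in range(n)]

def select_best(candidates, score_fn):
    """Step (2): score each candidate and keep the arg-max."""
    return max(candidates, key=score_fn)

# Hypothetical toy proposal: mutate the prompt by appending an instruction.
FRAGMENTS = ["Think step by step.", "Answer with a single word.", "Restate the options first."]

def toy_proposal(prompt):
    return prompt + " " + random.choice(FRAGMENTS)

def toy_score(prompt):
    # Stand-in for the real training signal, e.g. log-likelihood of correct answers.
    return len(set(prompt.split()))

best = select_best(propose_candidates("Classify the sentence.", 5, toy_proposal), toy_score)
```

In the actual method, the proposal and the score would both be computed by querying the frozen LLM rather than by these toy functions.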
Strengths: The proposed prompt optimization framework is both general and plausible, with its extension to the 2-layer setting using variational inference technically sound. The exploration of stacking multiple LLMs together is an intriguing and under-researched area. The paper's comprehensive experimentation on various datasets effectively showcases the method's applicability in both 1-layer and 2-layer settings.
Weaknesses: The paper's title, "Deep Language Networks", is misleading, as the main content only employs 2-layer LLMs. The term 'deep' should be avoided to prevent any misconceptions and exaggeration.
While the introduction claims that existing modular approaches are heavily dependent on prompt engineering to break a task into smaller tasks, the proposed method also relies on an initial given prompt. This initial prompt prescribes the subtask for each layer. For instance, in the 2-layer setting experiments, the first layer always serves as the CoT step, and the second layer provides the final answer. The roles of the two layers are determined by the initial human-provided prompt.
Though the paper suggests that it's easy to generalize from a 2-layer network to multiple layers, it's not straightforward to establish the initial prompts for each layer. While it's manageable in a 2-layer setting, mirroring the CoT steps where the first layer handles reasoning and the second layer provides the final answer, it becomes substantially more challenging as the number of layers increases. Therefore, the main motivation of this paper may not be as practically applicable as suggested.
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: In the context of variational inference, is the LLM that parameterizes q(h) separate?
A straightforward baseline would be CoT+APE. How does the performance of this simple baseline compare?
Stacking multiple LLMs increases computational complexity. An alternative is to directly use an ensemble of multiple LLMs. How does the proposed method compare with a simple ensemble of multiple LLMs? This could offer insights into whether LLMs should be combined horizontally or vertically.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 2 fair
Limitations: Please refer to the Weaknesses section for a detailed discussion of the paper's limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your review!
***“The paper's title, "Deep Language Networks", is misleading given that it’s only 2 layers”***
Our title expresses the conceptual framework that originated the idea of the variational inference prompt optimization algorithm, which is our main contribution. While we acknowledge that our models are not “deep”, the algorithm can be readily extended to multiple layers, at least theoretically, as you can find in the Appendix. If the reviewer thinks it’s useful, we could either remove the mention of “Deep Language Networks” in the title and write “Two-Layer Language Networks”, or specify “Towards Deep Language Networks…”.
---
***“While the paper argues that modular methods are heavily dependent on prompt engineering to break a task into smaller tasks, the proposed method also relies on an initial given prompt.”***
We agree! We rewrote our introduction and pointed out that our method is also relying on initial prompts and thus can be seen as a step towards **“integrating learnable components in a human-designed pipeline of prompts”**: in fact, we see our algorithm as a principled way to fine-tune initial prompts that were originally human generated.
---
***“The paper claims that it's easy to generalize from a 2-layer network to multiple layers”***
We apologize for the confusion: we did not mean to claim that it is practically easy to optimize more than 2 layers; indeed, we are open about discussing the optimization difficulties encountered in the two-layer case. The algorithm is readily extendable to multiple layers, as we point out in the Appendix. We hope that our paper can be considered a starting point from which we can think about training more than two layers in a principled way. We have rewritten our intro to make sure the training challenges with longer chains are clear.
---
***“It's not straightforward to establish the initial prompts for each layer.”***
We added a discussion about this point in the paper. Engineering prompts is arguably easier than choosing initializations of random neural network weights, which has long remained a difficult question. When working with commercial applications and engineering libraries such as LangChain, designers manually craft pipelines of prompts. One potential straightforward application of our prompt optimization algorithm is to fine-tune an already carefully human-engineered chain of prompts given some task data.
---
***“COT+APE / Ensemble of multiple LLMs”***
Thank you for suggesting the baselines. For CoT+APE, we followed the procedure described in Section 4.3 of [Zhou et al., 2023]: we use APE to find a prompt starting with “Let’s” that maximizes the likelihood of correct chain-of-thought reasoning steps. We ran that baseline on the four BBH datasets and observed lower performance except on the date dataset. Looking at the prompts generated by CoT+APE, they are much simpler than what we can obtain with DLN. For instance, on hyperbaton: “Let’s review the correct adjective order in English” vs. the example shown in Appendix D. Additionally, we report the scores obtained by a system ensembling three DLN-1 models: we train the three models individually and ensemble them by majority voting at test time. These results can be found in ***Table A/B*** in the shared response.
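The majority-voting step mentioned above can be sketched as follows (variable names are illustrative, not from the paper's code):

```python
from collections import Counter

def majority_vote(votes):
    """Combine several models' predictions for one example by majority vote;
    ties are broken by first-seen order (Counter preserves insertion order)."""
    return Counter(votes).most_common(1)[0][0]

# Three hypothetical DLN-1 models' answers on a batch of two test examples.
per_model = [["yes", "no"], ["yes", "yes"], ["no", "yes"]]
ensembled = [majority_vote(v) for v in zip(*per_model)]  # ["yes", "yes"]
```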
---
***“In the context of variational inference, is the LLM that parameterizes q(h) separate?”***
We use the same LLM (davinci-003) but in principle it can be separate, q(h) could either be a bigger LLM or a smaller LLM. Understanding the implication of this is definitely an interesting direction for future work.
---
Rebuttal Comment 1.1:
Title: Any further questions ?
Comment: Dear SaT7,
Do you have any further clarifications you need from the authors ? Thanks. | Summary: The authors provide an interpretation of LLMs as shallow language networks. They explained how One-layer language networks can be used for joint prompt training, and then moved to stacked (two) language networks. The authors propose to use variational inference for the training, and figured out a few practical instantiation that makes the optimization work well.
Strengths: Viewing LLMs as stochastic layers and prompts as tunable parameters, the authors provide a framework for joint prompt training of stacked LLMs using variational inference. The overall idea looks sound, and the framework and variational inference solution look elegant. The authors also experimentally verify their Deep Language Network on a few artificial datasets, and achieve comparable performance. The ablation study aligns with the authors' claim.
Weaknesses: -- As also claimed by the authors, it is difficult to make Eq1 (the bound) tight, and the optimization is also not straight-forward. In order to make the whole system work, a lot of practical instantiation is needed, making the whole system more cumbersome. In order to get diverse prompts and identify the near-optimal prompts, I expect the value of K and N in algorithm 2 will not be too small. An ablation study looking into that, to study the computational cost and performance effect given different K and N could be interesting.
-- Also, this work only evaluates on small scale datasets, and works on some artificial tasks. This may prevent the audience from understanding the benefit of the proposed methods. Is it because the training/evaluation is not so efficient, so the authors decided to work on a small scale?
-- I wonder if the proposed methods can be easily extended to three layers or beyond. Technically, it seems possible, but the optimization could be very challenging.
-- In line 186, the authors mentioned that they selected lambda for each task. This could be cumbersome when working on a new task/dataset.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Can the authors explain the results in Table 2 with richer information. Why does the approach seem to work very well on Nav and Subj, but worked clearly worse than other strong baselines? The 2-layer DLN end-to-end worked badly presumably due to the difficulty of optimization mentioned in section 4?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The authors present some bullet-points on potential limitations in the draft, I did not identify significant negative societal impacts.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your review and insights.
---
***“An ablation study to study the computational cost and performance effect given different K and N could be interesting.”***
An ablation study would absolutely be interesting and we will add it to the camera ready. Empirically, we find that setting the number of prompt samples `N` to 10-20 is enough; we found this number by optimizing dev performance on Hyperbaton. We suspect that more prompts won't help due to diversity issues in the prompt proposal. Augmenting the number of hidden samples `K` is costly, especially if the hidden-layer generations get long. We found `K` around 5-10 to work well across tasks; we found this number by optimizing performance on Navigate.
---
***“Can the authors explain the results in Table 2 with richer information. Why does the approach seem to work very well on Nav and Subj, but worked clearly worse than other strong baselines? The 2-layer DLN end-to-end worked badly presumably due to the difficulty of optimization mentioned in section 4?”***
Thank you for noticing this. For Date, we noticed a dataset problem, where the dev set is very small. Sometimes DLN-2 would get 90% on dev and 40% on test. For Logic.7, we think that the base model (dv003) is simply not strong enough to perform such reasoning and therefore we would need a "deeper" DLN where each step does a simpler reasoning step. Note that Logic.7 is the hardest of the logic datasets in BBH.
---
***“I wonder if the proposed methods can be easily extended to three layers or beyond. Technically, it seems possible, but the optimization could be very challenging.”***
We agree, it can be very challenging. Notably, one of the most challenging things is ensuring enough diversity in the prompt proposals. However, we think that in industrial applications, the optimization won't start de novo, but from a set of carefully human-designed prompts. In that setting, DLN can be considered a way to fine-tune this well-thought-out initialization in a principled way. Future work should be concerned with finding such scenarios, which are pervasive in industry but less so in academia.
---
Rebuttal Comment 1.1:
Comment: Thanks for the information. I agree that the planned changes would make the paper stronger, and I'm looking forward to a later version. I would keep the score as it is for now. | Summary: This paper introduces a novel discrete prompt tuning method, discussed under a scenario that regards the discrete prompt as the only tunable parameters while freezing all other model parameters. In this sense, the authors focus on getting an optimized prompt from one LLM and feeding it as the input prompt to the next LLM. Such a stacking-LLMs workflow is regarded as the main contribution, as the authors claim. Moreover, the authors introduce a new latent variable to further improve this pipeline.
In the experiments, the authors verify the effectiveness of the proposed method by comparing with a number of baselines, including several in-context learning methods and one instruction-tuning method, APE. The authors highlight that their method outperforms 0-shot GPT-4 using a text-davinci-003 backbone on the Hyper, Tree, and Disaster datasets.
Strengths: originality: This paper shows a novel discrete prompt engineering method from a new perspective that interprets the process of getting optimal prompts as the stacking of two LLMs. This angle is novel and interesting.
From the methodology perspective, the authors develop their method based on an improved APE and generalize it to the 2-layer case.
quality: the overall quality is mediocre; though the idea is novel, the authors do not provide enough evidence to support their claims.
clarity: the authors attach the code for the 1-layer & 2-layer cases, which makes it easier to interpret their ideas. However, the intuition for some settings needs to be improved.
significance: the authors evaluate the proposed 1-layer method on 9 tasks and the 2-layer method on 4. They re-run the experiments 3 times to verify significance.
Weaknesses: 1. The paper does not compare with enough correct baselines. The authors mainly compare with 0-shot GPT-3&4 and the APE method in the two main tables. However, if you regard the number of training examples the proposed method has access to as x-shot, then the comparison should be against x-shot GPT-3&4 instead of 0-shot GPT-3&4 (especially in Table 2). It's widely acknowledged that increasing the number of few-shot examples (within a range) enhances a model's performance. Is it still fair to compare a 0-shot model with your proposed method? Then the only baseline the authors are correctly comparing with is APE. I'd encourage the authors to compare with more baselines from both the discrete prompt engineering and continuous-space prompt tuning areas.
2. The generalization of the proposed method is questionable. The authors claim their method can generalize to more layers, but the comparison between the 2-layer and 1-layer models is insufficient. The authors only compare them on 4 datasets, and on some of them the 1-layer results are even better than the 2-layer's. It's not convincing.
3. Even though the authors mention in the conclusion & future work that testing with other LMs is needed, I want to highlight this as a major weakness of the paper. The value of this paper should be reevaluated if it only works on GPT-3 models. Why not stack two GPT-4 models, which should not be hard to do (just a slight change in the API calling)? Or maybe BERT/RoBERTa models. Without corresponding evidence, it's hard to justify the effectiveness of the proposed method.
4. The intuition behind some settings of the proposed method is unclear. For example, in Line 137, the authors suddenly define a weight without illustrating why the sum of these two terms is a good fit for their purpose. I am a little confused about how to interpret it.
5. The authors also mention that they hope to match the performance of the largest LLMs without incurring large computational and data costs. However, one should also note that GPT-3 API calling is a considerable computational cost. Without disclosing the number of times they call the GPT-3 API, it's hard to support the claim of incurring low computational cost.
Technical Quality: 1 poor
Clarity: 2 fair
Questions for Authors: 1. I am kind of curious why you adopt a subset of the test set for evaluation. Any reason for that?
2. I kind of doubt the performance of 0-shot GPT-4. For example, on the Subj dataset, fine-tuned RoBERTa and prompt-tuned RoBERTa can achieve a ~97% score, while 0-shot GPT-4 only gets 65.8. I am a little shocked by GPT-4's bad performance on this task. It would be better for the authors to release the 0-shot GPT-4 prompt they use for reproducibility purposes.
3. Why not report the 95% confidence interval for APE & 1-Layer LN & 2-Layer LN in Table 2?
4. Why do you split the main results into two tables?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 1 poor
Presentation: 2 fair
Contribution: 1 poor
Limitations: typo:
Algorithm 2 Line 13 \beta_k -> \beta_1
Algorithm 2 Line 18 \pi_0^n -> \pi_0^I
Algorithm 2 Line 19 \pi_1^n -> \pi_1^I
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your review!
Please find hereafter our answers:
---
***“1. The paper is not comparing with enough correct baselines. [...] it means, the only baseline authors are correctly comparing with is APE.”***
In the paper, in addition to APE, we compared to other non 0-shot baselines: ICL (5-shot) uses 5 examples from the training set, so we have one additional baseline which is *not* zero-shot. KATE retrieves from 400 examples, therefore the same number as we use for optimization, and uses 5 of those to perform few-shot learning.
For completeness, we performed additional experiments with 10 shots and 32 shots on the tasks we compared with DLN-2. Please take a look at the shared response (***Table A***) for these results.
As reported in ***Table A***, for some tasks (e.g., Logic 7), the length of the extra in-context demonstrations can quickly use up the context length limit (4096 in our case of text-davinci-003). This suggests an advantage of prompt optimization over in-context learning, even though both methods leverage information from the training set.
---
***“2. Scale beyond two layers; two layers is not always better than one layer”***
Scaling beyond two layers might be a hard problem. We think that prompt optimization can benefit from human engineered initializations, which can make optimization easier. We hope that our algorithm can serve as a starting point to fine-tune human crafted chains of prompts.
Some tasks might not benefit from CoT or reasoning paths. For this paper, we ultimately consider the 1 vs 2 layers as a model selection problem for each task.
---
***“3. Testing with other LM is needed. I want to highlight this as a major weakness of the paper.”***
We tested our prompt optimization algorithm both with GPT-4 and with an open-source model, WizardLM. Unfortunately, we cannot stack two GPT-4 models, as we need access to the log probabilities, which are not available at this time. We couldn't run 2 layers on the open-source model given that we require the “echo” functionality for log_probs and vLLM (the service we use for inference) doesn't have it yet. This requires more time. We will report the numbers in the camera ready.
We report mean accuracy across 3 seeds and the cost for inference over the test set as the number of processed tokens (less is better). These results were obtained using the latest GPT-4 OpenAI endpoint and can be found in ***Table C*** and ***Table D*** in the shared response.
---
***“4. The authors suddenly define a weight without illustrating why the sum of these two terms is a good fit for their purpose.”***
The sum of these two terms corresponds to weighting each sampled hidden state by its probability under the posterior distribution. This can be viewed as a biased but more robust importance weighting scheme in which the prior distribution is assumed to be uniform. We will make this clearer in the revision.
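As a rough illustration of this idea (not the paper's exact two-term formula), weighting each sampled hidden state by its relative probability under the posterior, with a uniform prior, reduces to a softmax over per-sample log-probabilities:

```python
import math

def posterior_weights(log_probs):
    """Turn per-sample log-probabilities into normalized weights (a softmax),
    so more probable hidden states receive proportionally more weight."""
    m = max(log_probs)  # subtract the max for numerical stability
    exps = [math.exp(lp - m) for lp in log_probs]
    z = sum(exps)
    return [e / z for e in exps]

weights = posterior_weights([-1.0, -2.0, -3.0])  # most probable sample weighted highest
```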
---
***“5. Without disclosing the number of times they call the GPT-3 API, it's hard to support their claim of incurring low computational cost.”***
For reference, the costs for calling GPT-3 are in the appendix. While DLN-2 requires twice as much compute as a single language layer using the same base LLM, our hope is that each language layer in a DLN-2 could be much smaller than the language layer in DLN-1 for the same performance, thus limiting or even negating the additional inference cost of using multiple layers.
---
***“Why you are adopting a subset of test set”***
Adopting a subset of the test set is also a common practice shared by other works due to computational costs associated with running large LMs. For example, this paper also uses a subset of the test set for evaluation: https://arxiv.org/abs/2104.08786
---
***“Doubt the performance of 0-shot GPT-4... FT RoBERTa and prompt tuning RoBERTa can achieve ~97% score”***
For the 0-shot prompt, we use the instruction in the Appendix. FT and prompt tuning use gradient-based optimization and thus achieve a higher score. The following paper shows the superiority of gradient-based optimization over 0-shot and in-context learning for NLP tasks: https://arxiv.org/abs/2205.05638. Our new results use the latest GPT-4 version available on the API; the score improved to 74.4% with this new model but is still significantly far from the ~97% score that you mention.
---
***“Typo in Algorithm 2”***
We fixed the typo in Algorithm 2; in lines 17-18, $n$ should be $N$. Thanks for noticing!
---
Rebuttal Comment 1.1:
Title: further improvements are necessary for this paper
Comment: Thank you for your reply. Here are a few thoughts I'd like to share regarding your responses.
1. **"Insufficient comparison with correct baselines"**
Table A indeed reveals the substantial benefit of incorporating more examples, resulting in a notable improvement of approximately +10 for both the ICL and KATE models in navigation and subjectivity tasks. As previously pointed out, it's worth contemplating whether this comparison is equitable, given that you extract information from 400 examples while the baselines have access to only 5/10/32 examples. To promote thorough analysis, I recommend conducting an ablation study where your models are trained with the same number of examples as the baselines.
Additionally, I reiterate my belief that the current version lacks an adequate comparison with a comprehensive range of baselines. Expanding the scope of baselines for comparison could enhance the robustness of your findings.
2. **"Scale beyond two layers; two layers is not always better than one layer"**
You've highlighted a significant discrepancy in your response. While you explained that "scaling beyond two layers might be a challenging issue," your submission predominantly emphasizes the ability of your model to incorporate numerous LLMs and attain superior performance compared to a single layer (as evidenced in Lines 3-5). This might be perceived as an overstatement. Notably, your original submission omits any mention of the unfavorable generalization to additional layers.
3. **"Testing with other LM is needed. I want to highlight this as a major weakness of the paper."**
You've made a valid point regarding the inclusion of GPT-4 and the open-source model WizardLM, which is a positive addition. However, I find it difficult to concur with the methodology of computing solely based on the number of tokens during inference. Indeed, your model necessitates training, which involves multiple API calls, unlike ICL which doesn't require training. To ensure a fair comparison, wouldn't it be more appropriate to report the number of tokens involved in both stages? Furthermore, I'd like to emphasize the importance of presenting comprehensive results across a broader range of datasets, rather than solely focusing on the four datasets mentioned earlier.
**"Why you are adopting a subset of test set"**
Utilizing a subset of the test set for evaluation becomes more reasonable when evaluating the entire test set is resource-intensive or when baseline methods themselves use such an approach. However, in your case, neither of these circumstances seems to apply. Without aiming to be overly critical, I'd like to point out that the paper you referred to has two distinctions from your submission: firstly, it lacks a direct relationship to your work, and secondly, it employs a subset of the development set rather than the test set.
I still have not received a response to my questions **"Why not report the 95% confidence interval for APE & 1-Layer LN & 2-Layer LN in Table 2?"** and **"Why do you split the main results into two tables?"**.
Considering the current quality of the submission and the need to incorporate more comprehensive experiments, I maintain my stance that further improvements are necessary for this paper.
---
Reply to Comment 1.1.1:
Title: clarifications on your responses
Comment: Thank you for the answer. Before replying, we want to emphasize that these models are expensive and we need to be mindful when selecting which experiments to run. We would love it if you could provide a bit more information about which experiments you would like us to prioritize, which conclusions you would draw from the additional results, and how that would affect your assessment of our work. While waiting for an answer, we still decided to run as many of the requested experiments as we could, hoping that this can address your concerns.
---
***As previously pointed out, it's worth contemplating whether this comparison is equitable, given that you extract information from 400 examples while the baselines have access to only 5/10/32 examples.***
KATE consists of a retrieval step and a prediction step. The retrieval step is executed over *400* examples. Due to the context length limit of davinci-003, the prediction step can take at most 32 examples. Therefore, the KATE baseline we report uses all 400 examples.
We would be happy to implement other baselines you think are more equitable.
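For readers unfamiliar with KATE, the retrieval step described above can be sketched as nearest-neighbor selection over example embeddings. This is an illustrative sketch only: the embedding dimensionality, `k`, and all names below are assumptions, not the authors' exact implementation.

```python
import numpy as np

def retrieve_examples(query_emb, example_embs, k=5):
    """KATE-style retrieval sketch: return indices of the k training
    examples whose embeddings are most cosine-similar to the query."""
    q = query_emb / np.linalg.norm(query_emb)
    e = example_embs / np.linalg.norm(example_embs, axis=1, keepdims=True)
    return np.argsort(-(e @ q))[:k]

# Toy check: retrieval scans all 400 candidates, but only k fit in the prompt.
rng = np.random.default_rng(0)
pool = rng.normal(size=(400, 16))             # stand-in for sentence embeddings
query = pool[7] + 0.01 * rng.normal(size=16)  # query nearly identical to example 7
top = retrieve_examples(query, pool, k=32)
print(int(top[0]))  # example 7 should rank first
```

This makes the distinction above concrete: the retrieval scores all 400 examples, while the prediction prompt only ever contains the top `k` of them.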
---
***the current version lacks an adequate comparison with a comprehensive range of baselines. Expanding the scope of baselines for comparison could enhance the robustness of your findings.***
We would be happy to implement other baselines you suggest that can enhance the robustness of our work.
---
***To promote thorough analysis, I recommend conducting an ablation study where your models are trained with the same number of examples as the baselines.***
We have now run DLN-1 on the 4 tasks from the rebuttal and updated our results:
| | nav / #tok | date / #tok | logic 7 / #tok | subj / #tok |
| - | - | - | - | - |
| ICL - 10-shot | 61.3 / 151k | 62.9 / 263k | 38.9 / 529k | 72.0 / 119k |
| ICL - 32-shot | 66.0 / 449k | 63.5 / 786k | exceeds ctx len | 83.2 / 343k |
| DLN-1 - 32-shot | 69.1 / 26k | 55.0 / 136k | 42.1 / 77k | 83.1 / 55k |
---
***While you explained that "scaling beyond two layers might be a challenging issue," ... your original submission omits any mention of the unfavorable generalization to additional layers.***
We do not think that “scaling beyond two layers might be challenging” means “unfavorable generalization to additional layers”. Rather, while training multiple layers may be hard, much as training deep neural networks was hard in their early days, the potential gains could also be very real if one manages to do so. Our experiments with DLN-2 show that this could be the case. As mentioned in the general response, we will tone down the abstract and intro in the camera-ready.
---
***However, I find it difficult to concur with the methodology of computing solely based on the number of tokens during inference.***
Please refer to the Appendix, where we report the cost of training DLN-2. It costs 60 dollars to train a DLN-1 and 273 dollars to train a DLN-2. We believe such training costs are negligible when serving the model to a large number of users: the tokens spent per user request at test time matter more in this context, as that is a cost you cannot amortize.
---
***Furthermore, I'd like to emphasize the importance of presenting comprehensive results across a broader range of datasets, rather than solely focusing on the four datasets mentioned earlier.***
We restricted ourselves to the datasets reported because every run is fairly expensive. Please suggest datasets you find particularly useful to strengthen our work.
---
***“it employs a subset of the development set rather than the test set.***
The reported results in Lu et al. are computed on a 256 subsample of the dev set. Please refer to Table 2 caption – first line – and their github repo: https://github.com/yaolu/Ordered-Prompt/blob/e93f70d0a5f6a8cfcadf6f2917c26eed265cd0be/config/cb.yaml#L2 The reason the authors give is for computational efficiency.
---
***"Why you split the main results into two tables?".***
Because running DLN-2 is more expensive and we thought it would be sufficient to do it on 4 datasets, and we feel these 4 datasets are the most representative of the range of behaviours we want to analyze.
---
***"Why not reporting the 95% confidence interval on APR & 1-Layer LN & 2-Layer LN in Table 2?"***
| | **Nav** | **Date** | **Logic 7** | **Subj** |
|:-----------------------:|:--------------:|:---------------:|:--------------:|:--------------:|
| **0-shot** | 64.1 | 56.4 | 45.9 | 61.7 |
| **APE-15** | 67.3 $\pm$ 7.7 | 32.1 $\pm$ 28.6 | 45.5 $\pm$ 4.7 | 61.3 $\pm$ 7.2 |
| **APE-400** | 56.9 $\pm$ 32.9| 23.5 $\pm$ 14.1 | 45.6 $\pm$ 12.4| 63.7 $\pm$ 9.2 |
| **DLN-1** | 67.1 $\pm$ 7.6 | 55.7 $\pm$ 4.5 | 47.5 $\pm$ 2.1 | 83.2 $\pm$ 6.5 |
| **DLN-2 (end 2 end)** | 83.1 $\pm$ 24.7| 65.9 $\pm$ 4.0 | 45.7 $\pm$ 3.5 | 85.9 $\pm$ 8.7 |

Rebuttal 1:
Rebuttal: We thank the reviewers for their comments. In addition to the replies to the specific questions raised by each reviewer, we want to address points that were raised multiple times.
---
***About the discrepancy between the ambition of our proposed framework and its technical instantiation***
While the framework can technically accommodate more than two layers, we acknowledge that we limit its instantiation to 1 and 2-layer networks. Some of the barriers to extending this framework to more general architectures, like the quality of the variational inference, will likely be challenging. We will rephrase our claims and be more explicit about the difference between the theoretical approach and what has currently been tested. While we believe this framework is both conceptually interesting and has already delivered some interesting results, we do not want to downplay the challenges to overcome to instantiate it more generally, nor the limitations of our current analysis, mainly due to the large computational costs of working with models of this scale.
---
***Regarding the lack of comparison with more models and baselines***
In addition to reemphasizing that we did compare with several prompt engineering and few-shot techniques in the original submission, we added comparisons to more models, also using different LLMs as language layers, as allowed by our framework. Particularly we added results for:
- APE+CoT
- ICL (in-context learning) and KATE with more in-context examples
- DLN-1 ensemble
- DLN-1 on top of GPT-4
- DLN-1 for open-source models such as [WizardLM-13b-v1.2](https://github.com/nlpxucan/WizardLM), an open-source LLM built on Llama-2.
We cannot run DLN-2 on top of GPT-4 due to the lack of available log-probabilities. We couldn’t compute DLN-2 for open-source models in time for the rebuttal, but we will report the results in the camera-ready version. We also report the cost (lower is better) in terms of total number of tokens for the test set (prompts included, #tok). We emphasize the cost at test time because it is more relevant in real-world deployment, and the training cost is one-off. Note that: 1) we used the newest GPT-4 version (0613) in Table C; 2) we re-optimized DLN-2 E2E hyper-params on the datasets tested and got improvements, especially for navigate.
Here are all the results:
---
| ***Table A*** | nav | date | logic 7 | subj |
| - | - | - | - | - |
| ICL - 5-shot | 56.5 | 62.1 | 36.7 | 76.4 |
| ICL - 10-shot | 61.3 | 62.9 | 38.9 | 72.0 |
| ICL - 32-shot | 66.0 | 63.5 | exceeds ctx len | 83.2 |
| KATE - 5-shot | 56.9 | 61.1 | 44.4 | 71.1 |
| KATE - 10-shot | 59.5 | 62.0 | 41.6 | 73.9 |
| KATE - 32-shot | 67.5 | 62.8 | exceeds ctx len | 80.4 |
| DLN-1 | 67.1 | 55.7 | 47.5 | 83.2 |
| DLN-2 | 82.9 | 63.9 | 45.2 | 85.9 |
| DLN-1 Ensemble-3-seeds | 65.6 | 51.2 | 46.8 | 83.2 |
***TLDR***: For Text-Davinci-003, increasing the number of ICL examples helps performance, but cannot match DLN-1, DLN-2 performance. More in-context examples could quickly use up the context length limit.
---
| ***Table B*** | hyper | nav | date | logic 7 |
| - | - | - | - | - |
| APE-15 | 68.5 ± 5.5 | 67.3 ± 7.7 | 32.1 ± 28.6 | 45.5 ± 4.7 |
| CoT+APE | 50.9 ± 0.8 | 61.5 ± 1.5 | 58.6 ± 2.2 | 38.9 ± 1.6 |
| DLN-1 | 91.9 ± 3.0 | 67.1 ± 7.6 | 55.7 ± 4.5 | 47.5 ± 2.1 |
| DLN-2 | - | 82.9 | 63.9 | 45.2 |
***TLDR***: DLN-1 and DLN-2 outperform CoT+APE.
---
| ***Table C*** | nav / #eval tok | logic 7 / #tok | subj / #tok |
| - | - | - | - |
| 0-shot GPT-4 | 73.8 / 16k | 62.5 / 50k | 74.4 / 13k |
| ICL-5-shot GPT-4| 72.6 / 83k | 60.0 / 291k | 90.4 / 66k |
| ICL-16-shot GPT-4| 77.1 / 233k | 62.2 / 822k | 93.8 / 181k |
| DLN-1 GPT-4| 78.5 / 86k | 64.0 / 80k | 90.6 / 57k |
***TLDR***: DLN-1 improves over ICL on 2 out of 3 tasks on GPT-4 at a comparable token cost. Some tasks do not benefit from ICL (i.e. reasoning tasks) while other tasks like SUBJ benefit significantly. In reasoning tasks, DLN-1 outperforms ICL even for GPT-4 by learning strategies to solve the problem. We combined DLN-1 and 5-shot ICL for SUBJ and matched 16-shot performance at a lower token cost, i.e. DLN-1 GPT-4 + 5-shot: 93.8 / 116k
---
| ***Table D*** | nav / #eval tok | logic 7 / #tok | subj / #tok |
| - | - | - | - |
| 0-shot WizardLM-v1.2 13B | 58.0 / 32k | 0.0 / 54k | 65.8 / 14k |
| ICL-5-shot WizardLM-v1.2 13B| 56.0 / 240k | 28.0 / 318k | 50.8 / 74k |
| DLN-1 WizardLM-v1.2 13B | 61.1 / 48k | 31.0 / 69k | 79.8 / 17k |
***TLDR***: Open models seem significantly less able to capture few-shot examples in the context. DLN-1 outperforms ICL on all tasks here.
Title: Simplifying Neural Network Training Under Class Imbalance
Decision: Accept (poster)
Summary: The authors show that simply tuning standard hyperparameters provides state-of-the-art performance on a wide variety of class-imbalanced datasets, which may be surprising and have an impact on the community: we have to re-think the experimental settings for performance evaluation on imbalanced datasets.
The authors show that simply tuning existing components of DNNs, such as the batch size, data augmentation, architecture size, pre-training, optimizer, and label smoothing, can achieve state-of-the-art performance without any specialized loss functions or samplers.
Specifically, (1) imbalanced data prefers small batch sizes, (2) the data augmentation strategies that achieve the best performance on balanced datasets yield inferior performance on imbalanced datasets, (3) large models that do not overfit to balanced datasets strongly overfit to imbalanced datasets of the same size, (4) adding an self-supervised loss during training can improve the performance on imbalanced datasets, (5) the SAM optimizer improves minority accuracy, and (6) label smoothing prevents the overfitting to minority classes.
Thorough experiments are performed on six image datasets, including natural image, medical, and remote sensing datasets as well as two tabular datasets. The models are a variety of CNNs, XGboost, and SVM. The authors run five seeds for each evaluation and report the mean and one standard error. The performance improvement compared with state-of-the-art (e.g., [Zhou+, CVPR2023, http://home.ustc.edu.cn/~zzp1994/2023068462.pdf]) is significant (especially on CIFAR-10 and -100).
The authors conclude that, even though they attain the state-of-the-art, existing methods designed for web-scraped natural image classification benchmarks do not always yield improvements on other real-world problems. If we are to reliably compare methods for class imbalance, we need a more diverse benchmark suite, i.e., there may not be a universally good model for imbalanced datasets.
The paper is well-written and easy to follow. I enjoyed reading the paper. Contribution is clear. Experimental settings are well-explained. Distinctions from previous works are clearly stated. Overall, the contribution and impact of the paper is significant compared with other papers that have been published in top conferences.
Strengths: - The authors show that simply tuning standard hyperparameters provides state-of-the-art performance on a wide variety of class-imbalanced datasets, which may be surprising and have an impact on the community: we have to re-think the experimental settings for performance evaluation on imbalanced datasets.
- Experimental results are convincing. Thorough experiments are performed on six image datasets, including natural image, medical, and remote sensing datasets as well as two tabular datasets, which are sufficient compared with other contemporary papers published top conferences. The models are a variety of CNNs, XGboost, and SVM. The authors run five seeds for each evaluation and report the mean and one standard error. The performance improvement compared with state-of-the-art (e.g., [Zhou+, CVPR2023, http://home.ustc.edu.cn/~zzp1994/2023068462.pdf]) is significant (especially on CIFAR-10 and -100).
- The paper is well-written and easy to follow. Contribution is clear. Experimental settings are well-explained. Distinctions from previous works are clearly stated. Overall, the contribution and impact of the paper is significant compared with other papers that have been published in top conferences.
- There are some papers that claim that supervised contrastive learning is effective for imbalanced learning, e.g., [Wang+, CVPR2021, https://arxiv.org/abs/2103.14267]. There is also a seminal paper that analyzes self-supervised learning and class imbalance [44], in which Liu+ claim that pre-trained self-supervised representations are more robust to imbalance than supervised representations. To my understanding, the authors' approach is somewhat similar to (but not exactly the same as) these two, i.e., adding a self-supervised loss to supervised training and mitigating the pre-training phase (Joint-SSL). The authors show that this approach is effective on imbalanced datasets, which is a consistent result with the two references above, although Joint-SSL alone is not always helpful or the improvement is marginal (Tables 1 & 2).
Weaknesses: - In this paper, hyperparameter tuning is shown to be important for performance on imbalanced datasets. THEREFORE, I would like to see ALL the hyperparameter settings and tuning method of ALL the experiment. I would like to see the code to reproduce the experimental results.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: - [Question] Could you discuss the differences of underlying mechanism of Joint-SSL and supervised contrastive learning, e.g., [Wang+, CVPR2021, https://arxiv.org/abs/2103.14267]? (I know the difference of the loss functions between the two.) This is a bit abstract question, but I would appreciate it if you could answer this.
- [Question] It is shown that SAM and its variant are effective on imbalanced datasets in [Azzuni+, ISBIC2022, https://ieeexplore.ieee.org/document/9854725] and [Zhou+, CVPR'23, http://home.ustc.edu.cn/~zzp1994/2023068462.pdf].
Could you briefly discuss the relationships between the present paper and these two?
- [Question] In [Shen+, ICLR2021, https://openreview.net/pdf?id=PObuuGVrGaZ], they discuss in Section 7 that label smoothing is ineffective when the dataset is imbalanced in the context of knowledge distillation. Could you discuss the relationships between this result and 6. in Introduction of the current paper?
- [Comment] On the other hand, label distribution smoothing in imbalanced regression problems is shown to be effective in [Yang+, ICML2021, http://dir.csail.mit.edu/]. There may be something more in label smoothing and imbalance. Interesting.
- [Question] Could you explain why are smaller batch sizes beneficial for more imbalanced datasets? The authors discuss this point: there is a tradeoff between (1) large batch sizes prevent from forgetting minor classes by including minority samples in almost every batch and (2) large batch sizes lead to overfitting to majority classes because they dominate almost all batches. If so, what is the main factor that controls this tradeoff? This would be a difficult question, and just a guess is welcome.
- [Comment (minor)] (Related to Figure 1) I would like to see more results on different datasets and models other than CIFAR-10 and -100 with ResNet, if possible.
- [Question (minor)] I know the focus of the present paper is CNNs, but are there any experimental results of Transformer-based models? I wonder if there is any differences between CNNs and Transformers.
- [Comment (major)] In this paper, hyperparameter tuning is shown to be important for performance on imbalanced datasets. THEREFORE, I would like to see ALL the hyperparameter settings and tuning method of ALL the experiment. I would like to see the code to reproduce the experimental results. This is also because it seems that several specialized methods are needed to achieve SOTA such as TrivialAugment + CutMix + label smoothing + exponential moving weight average + Joint-SSL + SAM-A (+ M2m) (please correct me if it is wrong). In addition, batch size, choice of data augmentation methods, learning rate, and other hyperparameters are also tuned, I guess.
- [Comment (minor)] Optimizer dependence of the classification performance on imbalanced datasets is also interesting.
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 3 good
Limitations: - As stated by the authors, the optimal learning configuration depends on the task, and thus it may be unclear how universal the know-how obtained in this paper is.
Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes
Flag For Ethics Review: ['No ethics review needed.']

Rebuttal 1:
Rebuttal: Dear Reviewer nEkH,
Thank you for your thorough and insightful feedback. We are grateful for your recognition of our work's significance and potential impact. We address each of your points below:
## Hyperparameter Settings and Code Availability:
Regarding hyperparameter tuning, we adopt a standard process as reported in previous works [1]. In our revised working draft, we now provide a more comprehensive appendix detailing all hyperparameter settings. Furthermore, we will make our code publicly available upon publication to ensure full reproducibility of our results.
## Compare to Supervised Contrastive Learning:
In our work, Joint-SSL combines self-supervised and supervised losses, aiming to leverage the benefits of both approaches. Unlike supervised contrastive learning, even if every sample were in the same class, the SSL objective would still allow us to learn useful features. Another difference lies in the choice of self-supervised loss: in our implementation, we use a non-contrastive objective, VICReg, alongside the supervised loss, while supervised contrastive learning uses a contrastive loss. We appreciate your comment and clarify this in our revised version.
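The combined objective described above could look roughly like the following sketch, which pairs a cross-entropy term with two of VICReg's three terms on two augmented views. The covariance term, all constants, and the function names are simplified or assumed for illustration; this is not the paper's code.

```python
import numpy as np

def cross_entropy(logits, y):
    # Numerically stable log-softmax cross-entropy.
    s = logits - logits.max(axis=1, keepdims=True)
    logp = s - np.log(np.exp(s).sum(axis=1, keepdims=True))
    return float(-logp[np.arange(len(y)), y].mean())

def vicreg_terms(z1, z2, gamma=1.0):
    """Two of VICReg's three terms on embeddings of two augmented views:
    invariance (views should match) and variance (embedding dimensions
    should not collapse). The covariance term is omitted for brevity."""
    invariance = float(np.mean((z1 - z2) ** 2))
    std = np.sqrt(z1.var(axis=0) + 1e-4)
    variance = float(np.mean(np.maximum(0.0, gamma - std)))
    return invariance + variance

def joint_loss(logits, y, z1, z2, lam=0.5):
    # Supervised term plus a weighted non-contrastive SSL term.
    return cross_entropy(logits, y) + lam * vicreg_terms(z1, z2)
```

The point of the sketch: the SSL term depends only on the embeddings of the two views, not on labels, which is why it remains informative even when every sample in a batch belongs to the same class.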
## Relationships with SAM:
Thank you for pointing out these papers. We can't find the Azzuni+ paper and would appreciate a pointer. [1] suggests using SAM directly to combat data imbalance without optimizing the perturbation size per class. However, [2], whose work was parallel to ours, suggests a more complicated version of SAM requiring a two-stage process. We agree that the first step of their method is similar to ours, and we will update our revised paper accordingly.
## Label Smoothing and Imbalance:
Shen et al. (2021) found that applying the same smoothing to all classes is ineffective for imbalanced datasets. However, our suggested method applies different smoothing to each class based on its distribution. We apply more smoothing to the minority classes, which need regularization. This is consistent with our other findings that methods increasing regularization and robustness against overfitting of the minority classes are beneficial for imbalanced training.
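As a hypothetical illustration of such class-dependent smoothing (the linear-in-frequency schedule and `eps_max` below are invented for this sketch, not the paper's exact recipe), one could compute per-class smoothed targets like this:

```python
import numpy as np

def class_smoothed_targets(labels, class_counts, num_classes, eps_max=0.2):
    """Sketch of class-dependent label smoothing: rarer classes receive a
    larger smoothing coefficient, so minority predictions are regularized
    more. Schedule and eps_max are illustrative assumptions."""
    freqs = class_counts / class_counts.sum()
    eps = eps_max * (1.0 - freqs / freqs.max())  # 0 for the largest class
    targets = np.zeros((len(labels), num_classes))
    for i, y in enumerate(labels):
        targets[i] = eps[y] / num_classes        # spread mass uniformly
        targets[i, y] += 1.0 - eps[y]            # keep most mass on the label
    return targets

counts = np.array([90.0, 10.0])                  # 9:1 imbalance
t = class_smoothed_targets(np.array([0, 1]), counts, num_classes=2)
print(t.round(3))  # majority row stays one-hot; minority row is smoothed
```

Under this schedule the majority-class targets are untouched while minority-class targets are softened, mirroring the "more smoothing where more regularization is needed" idea described above.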
## Why Smaller Batch Sizes are Beneficial for More Imbalanced Datasets:
Our current understanding is that modern DNNs tend to perform well with large batch sizes during balanced training. This is partly because larger batches inherently provide a form of implicit regularization: they include diverse samples from different classes in each batch, creating heterogeneity in the data that promotes generalization. On the other hand, this heterogeneity is often lacking in imbalanced training. Therefore, with large batches, DNNs may overfit to minority classes. Conversely, smaller batch sizes introduce a form of regularization. Specifically, we found that smaller batch sizes help prevent overfitting to minority classes. This acts as a form of regularization in SGD as described by Sekhari et al. [3], who analyzed the role of batch size as a regularizer for the loss landscape, encouraging generalization. This effect introduces noise into the gradient estimation, counteracting the tendency to overfit to minority examples.
To delve deeper into this issue, we examine its interaction with our joint-SSL method. While SSL models typically perform better with larger batch sizes for balanced datasets, our study suggests that the conventional wisdom for SSL models may not apply to imbalanced data. To verify this, we trained our joint-SSL algorithm with different batch sizes on CIFAR-10. As seen in the provided page, the impact of batch size is less pronounced, with joint-SSL acting as an alternative regularizer, mitigating the overfitting of minority classes.
We suspect the heterogeneity of samples might control the trade-off between including more samples in each batch and avoiding overfitting. However, we acknowledge that this is a complex topic, requiring further investigation to understand the underlying mechanism fully.
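The first side of the trade-off above can be quantified with a back-of-the-envelope calculation: under i.i.d. sampling, the chance that a batch contains at least one minority sample is 1 - (1 - p)^B, so small batches frequently contain no minority samples at all. A minimal sketch:

```python
def p_batch_has_minority(batch_size, minority_frac):
    """Probability that an i.i.d.-sampled batch contains at least one
    minority-class example: 1 - (1 - p)^B."""
    return 1.0 - (1.0 - minority_frac) ** batch_size

# With a 1% minority class, batches of 32 miss it most of the time,
# while batches of 512 almost always include it.
print(round(p_batch_has_minority(32, 0.01), 3))   # ≈ 0.275
print(round(p_batch_has_minority(512, 0.01), 3))  # ≈ 0.994
```

This makes the tension explicit: small batches sacrifice minority coverage per step in exchange for the gradient noise and regularization discussed above.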
## Additional Results on Different Datasets and Models:
Prompted by your feedback, we have included additional results with Tiny-ImageNet and iNaturalist, using Swin Transformer v2 and ConvNeXt. The results of these additional experiments consistently align with our previous findings, enhancing our work's robustness and general applicability. Furthermore, these enhancements increase the diversity of our architectural analysis, contributing to a broader understanding of various architectures' behavior under imbalance.
## Experimental Results of Transformer-Based Models:
While our focus in this paper is primarily on CNNs, in response to your comment, we have analyzed the transformer-based model, Swin Transformer v2. As you can see in the attached figure, the resulting behavior is similar to that exhibited in CNN experiments.
## Optimizer Dependence on Classification Performance:
We have now added a more rigorous analysis to our revised working draft, and we’ll include it in our camera ready version. In brief, the differences between SGD and AdamW are small. Generally, while AdamW was better for larger batch sizes and more robust to hyperparameter choices, SGD achieved better results for imbalanced training.
Thank you again for your thoughtful review. We made a significant effort to address your feedback, including multiple experiments and paper edits, and we would appreciate it if you would consider raising your score in light of our response. Please let us know if you have additional questions we can address.
## References:
[1] Liu et al., 2021, “Self-supervised learning is more robust to dataset imbalance”.
[2] Zhou et al., 2023, “Class-Conditional Sharpness-Aware Minimization for Deep Long-Tailed Recognition”.
[3] Sekhari et al., 2021, "Sgd: The role of implicit regularization, batch-size and multiple-epochs”.
---
Rebuttal Comment 1.1:
Comment: Thank you for the discussions and thorough revisions.
I really appreciate the significant effort.
I carefully read all the comments (and other reviews).
Let me give some more comments below.
> We can't find the Azzuni+ paper and would appreciate a pointer.
I am sorry for the inconvenience.
Azzuni+ is ["Color Space-based HoVer-Net for Nuclei Instance Segmentation and Classification", Azzuni+, IEEE ISBICC 2022, https://arxiv.org/abs/2203.01940].
### Why Smaller Batch Sizes are Beneficial for More Imbalanced Datasets:
Thank you for the detailed reply.
This is also pointed out by Reviewers kUXT & 8NJC, but your explanation is convincing.
My understanding is as follows.
A large batch size leads to small variance of gradients.
The batch is likely to be dominated by major examples.
Thus, the weight tends to converge to a local minima that prefers majority class.
On the other hand, a small batch size induces noise to gradients, which perturbs the gradients and thus they do not always go to the poor local minima.
### Related to Question on Batch Size and SSL [Reviewer kUXT]:
Interestingly, a recent study states that large batch sizes are not always necessary for good performance [1].
Is this relevant to the present paper?
[1] ON THE DUALITY BETWEEN CONTRASTIVE AND NON- CONTRASTIVE SELF-SUPERVISED LEARNING
### I also would like to see results on transformers [also by Reviewer kUXT & 8NJC]:
I think the lack of it is not a critical problem because major CNN models are included, and thus the paper provides many insights to CNN users.
Anyways, in Author Response, the authors added extra results on SwinV2 (and ConvNext, additional datasets, etc.),
which addressed the concern.
### Language models, imbalance, and batch size:
Related to future work, [2] analyzes multilingual models and shows that minority languages are affected by imbalance. Oversampling is not that effective in this scenario.
The proposed method is shown to be effective even when the batch size is large.
[2] Robust Optimization for Multilingual Translation with Imbalanced Data
### Lesson 6:
I found two relevant papers related to Lesson 6.
[3] uses different hyperparameters for different classes to address the class-imbalance problem.
[4] uses class-dependent temperature for imbalanced learning.
Relations, differences, and discussions should be included in comparison with Lesson 6.
[3] AutoBalance: Optimized Loss Functions for Imbalanced Data
[4] Identifying and compensating for feature deviation in imbalanced deep learning ([61] in the main text).
### Contributions Clarified:
The contributions are clarified more in Author Response to Reviewer 8NJC.
### Some of the findings are valid for balanced setup as well [Reviewer kUXT,]:
This is a comment by Reviewer kUXT,
but it is important to show that the techniques that are effective in balanced setup are effective also in imbalanced setup.
This is because not all techniques work well universally.
In particular, whether a sampling or augmentation method is effective in imbalanced training is not obvious in my opinion;
for instance, when to use oversampling and undersampling is still an open question.
Therefore, this point is not critical and I still support the acceptance.
# [Important] Evaluation Metrics [Reviewer D6vk]:
I thank D6vk for pointing this out.
Some of the results are given in classwise accuracy, but some are given in accuracy,
which is notoriously known to prefer major classes (and confusingly, some papers use term "accuracy" for "(averaged) classwise accuracy"!).
Thus, I personally do not recommend you to use accuracy in imbalanced learning.
Instead, I recommend the macro-averaged recall (averaged classwise accuracy), macro-averaged AUC, and macro- and micro-averaged F1 score, for example.
In view of this point, are there any potential changes in the conclusion, or can you include more metrics?
[5] is a comprehensive, nice study on binary confusion matrix-based metrics and provide more detailed differences between accuracy, recall, F1, precision, specificity, and so on.
[5] The Impact Of Class Imbalance In Classification Performance Metrics Based On The Binary Confusion Matrix
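To make the accuracy-vs-macro-recall distinction above concrete, here is a minimal sketch contrasting plain accuracy with macro-averaged recall (averaged classwise accuracy) on a toy imbalanced prediction set; scikit-learn's `recall_score(average='macro')` computes the same quantity.

```python
import numpy as np

def accuracy(y_true, y_pred):
    return float(np.mean(y_true == y_pred))

def macro_recall(y_true, y_pred, num_classes):
    """Macro-averaged recall (averaged classwise accuracy): per-class
    recall averaged without weighting, so every class counts equally."""
    recalls = [float(np.mean(y_pred[y_true == c] == c)) for c in range(num_classes)]
    return float(np.mean(recalls))

# 95 majority samples (all correct), 5 minority samples (all wrong):
y_true = np.array([0] * 95 + [1] * 5)
y_pred = np.zeros(100, dtype=int)        # always predict the majority class
print(accuracy(y_true, y_pred))          # 0.95 -- looks strong
print(macro_recall(y_true, y_pred, 2))   # 0.5  -- exposes the failure
```

A trivial majority-class predictor scores 95% accuracy here while its macro-averaged recall is only 0.5, which is exactly why the reviewer cautions against plain accuracy for imbalanced learning.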
---
Reply to Comment 1.1.1:
Title: Second Response for Reviewer E6os - Part 1
Comment: Dear Reviewer E6os,
Thank you once again for your thoughtful and comprehensive feedback, and for the additional references you provided. We appreciate the time you have invested in our work. Below, we address each of your points in detail:
**Azzuni+ Paper Reference:**
Thank you for providing the reference to the Azzuni+ paper. Unlike Azzuni et al., who suggest using SAM directly to combat imbalance without optimizing the perturbation per class, our modified SAM treats majority and minority classes differently, adjusting the decision boundaries specifically for minority classes.
**Batch Sizes and Imbalanced Datasets:**
Your explanations are accurate. To further investigate this issue, we conducted additional experiments, which are detailed in the general comment section. In summary, we found that smaller batch sizes can help mitigate overfitting, especially for minority classes. First, we analyzed the train/test errors for both minority and majority classes on Tiny-ImageNet using ConvNeXt, finding that minority classes tend to overfit significantly more with larger batch sizes; this overfitting decreases as batch size reduces, suggesting smaller batch sizes can help mitigate overfitting for minority classes. Second, we examined the variance of neural network parameter gradients, comparing balanced and imbalanced training scenarios. Our results show that imbalanced training results in lower gradient noise, particularly with larger batch sizes, which correlates with a higher propensity for models to overfit under imbalanced training conditions. Lastly, we investigated loss flatness through the Hessian spectrum, a well-known indicator of generalization and overfitting. Our findings reveal that, in imbalanced training, larger batch sizes lead to convergence at points with higher loss curvature, as characterized by the leading eigenvalue of the Hessian.
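The leading Hessian eigenvalue mentioned above is typically estimated with power iteration on Hessian-vector products. Below is a minimal sketch with a toy explicit Hessian; the function names and constants are illustrative assumptions, not the authors' actual measurement code.

```python
import numpy as np

def leading_eigenvalue(hess_vec_product, dim, iters=100, seed=0):
    """Estimate the top Hessian eigenvalue via power iteration on
    Hessian-vector products, a common sharpness proxy (a larger value
    means higher loss curvature at the minimum)."""
    rng = np.random.default_rng(seed)
    v = rng.normal(size=dim)
    v /= np.linalg.norm(v)
    for _ in range(iters):
        hv = hess_vec_product(v)
        v = hv / np.linalg.norm(hv)
    return float(v @ hess_vec_product(v))  # Rayleigh quotient at convergence

# Toy check with an explicit (diagonal) Hessian.
H = np.diag([5.0, 1.0, 0.1])
print(round(leading_eigenvalue(lambda v: H @ v, 3), 3))  # ≈ 5.0
```

In practice the Hessian of a network is never formed explicitly; `hess_vec_product` would be implemented with automatic differentiation, which is what makes this spectrum-based sharpness measurement tractable at scale.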
**Contrastive and Non-Contrastive SSL:**
Thank you for pointing out the paper. This paper claims that with more tuning of the InfoNCE parameters, the robustness of SimCLR and MoCo to small batches can be improved. Our hypothesis is that, as we mentioned before, for small batch sizes, along with serving as a regularizer, there is more noise in the gradient estimation which may lead to bad minima (especially in SSL, where the signal is weak compared to supervised learning). However, there are more ways to prevent high noise than increasing the batch size. We believe that exploring methods that combine the advantages of small batch sizes, which prevent overfitting, with solutions to the noise problem for these small batch sizes, is a very interesting future research direction.
**Transformer Results:**
We are pleased to hear that you found our additional experiments, including those on the Swin Transformer V2, to be satisfactory. We believe these results strengthen our paper's insights into various architectures under class imbalance.
**Language Models, Imbalance, and Batch Size:**
We appreciate the pointer to the work analyzing multilingual models under imbalance. This is indeed an interesting and relevant direction. We would like to point out that in language models (and actually in every domain where natural labels are not available), there may be several kinds of imbalanced data, not only across different languages. For example, there can be imbalances in different styles or structures of language. This phenomenon makes research on imbalanced data in language models much more complicated, but also very interesting, of course.
**Lesson 6 and Related Papers:**
Thank you for highlighting these papers. [3] indeed uses different hyperparameters for different classes to address the class imbalance. However, this method is much more complicated and requires extensive hyperparameter search based on the validation loss, and it even includes special data augmentation techniques. Moreover, as mentioned in our paper, there are many works which modify the loss function to behave differently for majority and minority classes. The main advantage of our label smoothing method is its simplicity and that it doesn't require specific hyperparameters designed specially for imbalanced data. Regarding [4], thank you for pointing out this paper. We agree that the class-dependent temperature technique is highly relevant to our work and can be seen as a different way of "smoothing" the output of the network. We believe that the main advantage of our smoothing method is its similarity to the current label smoothing method, which is common in balanced learning. In this case, you don't need to investigate this method each time for imbalanced training; instead, you can apply knowledge gained from balanced training. We now discuss these works in our updated working draft and appreciate your suggestions.

Summary: The paper studies the long-tail recognition problem and the impact of existing components of standard deep learning pipelines on the generalization performance, such as the batch size, data augmentation, architecture size, pre-training, optimizer, and label smoothing. They find that simply tuning those components can achieve state-of-the-art performance without any specialized loss functions or samplers.
Strengths: - Rethinking the effect of existing components in long-tailed recognition is a very important research topic.
- It is interesting to show that the minority class performance decreases at some point with the increase of the model size for imbalanced data.
- Extensive experiments are conducted on small-scale datasets.
Weaknesses: - The batch size experiment in Fig. 1 is not very convincing. What is the training method used? Is it ERM? Does this result hold across different methods? If it is ERM, I wonder whether the same conclusion (data with a high degree of class imbalance tends to benefit from smaller batch sizes) would still hold if you do cRT afterwards.
- For data augmentation, it is also not clear what method is evaluated. Besides, the observation that data augmentation improves minority classes more is not new knowledge IMHO. There are even papers designing augmentations that strengthen the minority classes specifically, e.g. MFW [1] and TFE [2]. Also, to what extent does the claim "AutoAugment emerges as the most effective for imbalanced data" hold? Does it hold across different datasets, architectures, and training methods? From Fig. 7, it is hard to judge whether AutoAugment outperforms TrivialAugment significantly, since the error bar is not reported.
- Model architecture: I'm not sure the claim that balanced and imbalanced accuracy are "virtually uncorrelated" is well supported, as the performance difference between methods in Fig. 3 right is very small. How stable are the results in Fig. 3 right?
- What was the performance of ERM without the "tuned routines"? What is the performance improvement of each component in the refined ERM (batch size and data augmentation)? I think showing this ablation can be very helpful in understanding the paper.
[1] Procrustean training for imbalanced deep learning, ICCV 2021
[2] Co-Learning of Representation and Classifier for Imbalanced Semi-Supervised Learning, CVPR 2022
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Please see questions in Weaknesses.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: - Since ImageNet has also been used as a standard dataset for long-tailed recognition for quite a long time, I wondered whether the authors had any results on it to verify whether their findings scale beyond CIFAR-10/100.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear Reviewer nEkH,
Thank you for your thoughtful feedback, and we appreciate the time you took to provide your insight. We address your points below:
## Batch Size Experiment:
The training method used in Figure 1 is ERM. Following your comment, we found that cRT benefits from smaller batch sizes, whereas MiSLAS, a method designed for imbalance, benefits less (Figure 1 on the attached page). MiSLAS prevents overfitting to the minority class, acting as a regularizer.
Smaller batch sizes improve performance by acting as regularization in SGD, preventing overfitting to minority classes (see [1] for a theoretical analysis of batch size as a regularizer). The added gradient noise smooths the loss landscape; cRT does not prevent this effect, whereas MiSLAS does, which reduces the impact of batch size. These findings emphasize the complex relationship between batch size and training methods, offering insights for imbalanced data training. To explore further, we examined joint-SSL on CIFAR-10 and found that the dynamics change with imbalanced data: the conventional wisdom for SSL models may not directly apply, and the impact of batch size is less pronounced with joint-SSL, which acts as an alternative regularizer mitigating the overfitting of minority classes.
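This regularization view can be illustrated with a minimal NumPy sketch on a hypothetical synthetic imbalanced problem (the data, model, and batch sizes below are illustrative, not the paper's experimental setup): the variance of mini-batch gradient estimates grows as the batch size shrinks, which is the noise-injection effect described above.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic imbalanced binary problem: 900 majority vs. 100 minority samples.
X = np.vstack([rng.normal(0, 1, (900, 5)), rng.normal(1, 1, (100, 5))])
y = np.concatenate([np.zeros(900), np.ones(100)])
w = rng.normal(0, 0.1, 5)  # arbitrary fixed parameter vector

def minibatch_grad(batch_size):
    """Logistic-loss gradient estimated from one random mini-batch."""
    idx = rng.choice(len(y), size=batch_size, replace=False)
    p = 1 / (1 + np.exp(-X[idx] @ w))
    return X[idx].T @ (p - y[idx]) / batch_size

def grad_variance(batch_size, trials=500):
    """Total variance of the gradient estimate across many mini-batches."""
    grads = np.stack([minibatch_grad(batch_size) for _ in range(trials)])
    return grads.var(axis=0).sum()

var_small, var_large = grad_variance(16), grad_variance(256)
# Smaller batches yield noisier gradient estimates.
print(f"var @ B=16: {var_small:.4f}, var @ B=256: {var_large:.4f}")
```

The roughly inverse scaling of gradient variance with batch size is what makes small batches act as an implicit regularizer in SGD.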
## Data Augmentation:
We used ERM when we evaluated the data augmentation methods. We found that AutoAugment was most effective for imbalanced data for CIFAR10, CIFAR100, and CINIC-10 but not on balanced data. We appreciate your point about the error bars in Figure 7 and have updated our working draft for clarity.
Regarding the specific augmentation methods you mentioned (MFW and TFE), these are indeed valuable techniques. However, our focus was on analyzing the effect of standard, commonly-used high-performance augmentations in the context of imbalanced data rather than specialized augmentations designed for imbalanced data. Our findings provide useful insights for practitioners using standard augmentation techniques in their pipelines.
## Model Architecture:
Based on your comment, to check how stable the results are, we conducted additional experiments with different random seeds to assess the stability of our results. Specifically, we ran each model configuration with five seeds and added error bars to our figures representing one standard error. The resulting figure, which can be seen on the additional page, shows that the error bars are relatively small compared to the differences between the models, indicating that our results are indeed stable across seeds. Interestingly, although the differences in performance between models are small, there is no clear correlation between balanced and imbalanced performance across the models. This further emphasizes the complexity of the class imbalance problem and the importance of considering both balanced and imbalanced performance when designing models.
We believe these additional experiments strengthen our findings. We have added them to our working draft and will include them in the camera-ready version.
## Performance of ERM and Improvement of Each Component:
We concur with your suggestion that an ablation study would be beneficial in demonstrating the contribution of each component in our approach, so we have now added it to the table on the attached page. In general, without our proposed "tuned routines", the performance of ERM is somewhat lower than that of reweighting methods. Breaking down the impact of individual components, we find that data augmentation is the primary contributor to performance improvement, while the influence of batch size, though still beneficial, is less pronounced.
## More datasets:
We agree with your comment about adding more datasets. Prompted by your feedback, we have now expanded our experiments to include more challenging visual datasets, namely Tiny-ImageNet [2] and iNaturalist [3]. As can be seen in Table 1 on the attached page, our finely-tuned training routine, equipped with Joint-SSL, Asymmetric-SAM, and modified label smoothing, delivers performance that matches or surpasses that of previous state-of-the-art imbalanced training methods on these new datasets. We have also extended our analysis to tabular datasets, which represent a challenging frontier for deep learning research. While tabular datasets often inherently exhibit imbalance, there has been limited research addressing the impact of imbalanced data on deep learning in this domain. We applied our suggested methods (SAM-A, modified label smoothing, and SGD with cosine annealing performed with a small batch size) to three tabular datasets, the Otto Group Product Classification [4], Covertype [5], and Adult [6] datasets, using a Multilayer Perceptron (MLP) with improved numerical feature embeddings. Our methodology outperforms both XGBoost and recent state-of-the-art neural methods such as ResNet and FT-Transformer [7] on all three datasets, illustrating the applicability of our findings beyond image classification. Please refer to Table 2 on the attached page for the full results.
Thank you again for your thoughtful review. We made a significant effort to address your feedback, including multiple experiments and paper edits, and we would appreciate it if you would consider raising your score in light of our response. Please let us know if you have additional questions we can address.
## References:
[1] Sekhari et al., 2021. Sgd: The role of implicit regularization, batch-size and multiple-epochs.
[2] Le et al, 2015, Tiny imagenet visual recognition challenge.
[3] Van Horn et al., 2018, The iNaturalist species classification and detection dataset.
[4] Kaggle, 2015, Otto Group Product Classification Challenge.
[5] Blackard and Dean, 1999, Predicting forest cover types.
[6] Kohavi and Sahami, 1996, Error-based and entropy-based discretization.
[7] Gorishniy et al., 2021, Revisiting deep learning models for tabular data.
---
Rebuttal Comment 1.1:
Title: Additional Experiments Regarding the Role of Batch Size
Comment: Dear Reviewer nEkH,
Thank you for your thoughtful review. We sincerely hope that we have addressed your concerns and highlighted the novelty of our results. Beyond the experiments outlined in our previous rebuttal, which expanded our evaluation to new architectures (Swin Transformer V2, ConvNeXt) and datasets (Tiny-ImageNet, iNaturalist, and tabular datasets), assessed the stability of our model-architecture results across different random seeds, and conducted a component-wise ablation study, we have now conducted further focused experiments.
These new experiments examine the role of batch size as a regularizer in SGD during imbalanced training, especially in preventing overfitting to minority classes, complementing our earlier experiments on how batch size interacts with different training methods (specifically ERM, cRT, and MiSLAS).
Below, we describe these experiments in brief; for full details, please see the general comment.
**1. Train/Test Error Analysis for Minority and Majority Classes:**
We present the training and testing error as a function of batch size for both minority and majority classes on Tiny-ImageNet using ConvNeXt. The tables reveal that minority classes tend to overfit significantly more when using larger batch sizes. This overfitting diminishes noticeably as the batch size decreases, suggesting that smaller batch sizes can mitigate overfitting, particularly for minority classes.
**2. Variance of Gradients Analysis for Balanced vs. Imbalanced Training:**
We analyze the variance of neural network parameter gradients, a crucial indicator of the network's learning dynamics and overfitting. Our experiment investigates the variance of gradients as a function of batch size for Tiny-ImageNet on ConvNeXt, comparing balanced and imbalanced training scenarios. The table demonstrates that imbalanced training exhibits much lower gradient noise, especially as batch sizes grow larger, explaining why models are more prone to overfitting under imbalanced training conditions.
**3. Hessian Spectrum and Robustness:**
We examine loss flatness through the Hessian spectrum, a known indicator of generalization and overfitting. Recent work by Yao et al. (2018b) shows that large batch training leads to convergence to points with high curvature, characterized via the dominant eigenvalue of the Hessian, and suggests that high curvature can lead to poor generalization. To investigate this in the context of imbalanced training, we trained models using both balanced and imbalanced datasets for different batch sizes and calculated the top eigenvalue of the Hessian with respect to the model parameters. Our table shows that the top eigenvalue of the Hessian for imbalanced training increases much faster as we increase the batch size compared to balanced training, indicating a higher risk of overfitting, especially for imbalanced data.
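The top-Hessian-eigenvalue computation described here is typically done with power iteration on Hessian-vector products; a minimal sketch on a toy quadratic loss (the matrix `A` and all hyperparameters are illustrative, not from our experiments) looks like:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy loss with a known Hessian: L(w) = 0.5 * w^T A w, so H = A everywhere.
A = np.diag([10.0, 3.0, 1.0])
grad = lambda w: A @ w

def hvp(w, v, eps=1e-4):
    """Hessian-vector product via finite differences of the gradient."""
    return (grad(w + eps * v) - grad(w - eps * v)) / (2 * eps)

def top_eigenvalue(w, dim, iters=100):
    """Power iteration on H(w) using only Hessian-vector products."""
    v = rng.normal(size=dim)
    v /= np.linalg.norm(v)
    for _ in range(iters):
        hv = hvp(w, v)
        v = hv / np.linalg.norm(hv)
    return v @ hvp(w, v)  # Rayleigh quotient at the converged direction

lam = top_eigenvalue(np.zeros(3), dim=3)
print(round(lam, 2))  # ≈ 10.0, the largest eigenvalue of A
```

On a real network the same procedure applies, with `hvp` computed via automatic differentiation instead of finite differences; a larger top eigenvalue indicates a sharper minimum.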
These additional experiments offer concrete justification for the observations in our paper. While we cannot send the figures directly due to openreview constraints, we have included detailed tables in a general comment and have sent the corresponding figures to the AC.
Thank you once again for your constructive feedback. We have put significant effort into addressing your concerns, including conducting many new experiments based on your comments, to ensure the rigor and quality of our work. We would greatly appreciate it if you would consider raising your score accordingly.
---
Rebuttal 2:
Title: Addressing Reviewer Feedback
Comment: Dear Reviewer nEkH,
We have made a significant effort to address the questions and concerns raised in your review. This includes conducting many more experiments, such as delving deeper into the role of batch size in regularization during imbalanced training and the stability of our results as you suggested.
We kindly ask that you take these into consideration during your final assessment and consider raising your score accordingly.
As the review period draws to an end, please let us know if there are any further questions we can address.
Thank you
---
Rebuttal Comment 2.1:
Comment: I thank the authors for the detailed response and efforts made in running more experiments. The rebuttal addresses most of my concerns, so I would increase my final rating to weak accept. | Summary: This paper presents different approaches to enhance the performance of neural network classifiers over imbalanced datasets. Unlike most research focusing on specialized loss functions or resampling techniques, this study promises that state-of-the-art performance is achievable over the neural classifiers by simply tuning existing components of standard deep learning pipelines. Section three suggests that the study is evaluated over six image datasets and two datasets from UCI Machine Learning Repository, validated over ResNets and WideResNet classifiers, and compared with nine standard baselines. The reported evaluation measures include overall test accuracy and minority and majority class accuracy over the 20% of classes with the smallest and highest number of samples. The key observations from the study are -- (a) small batches perform better for the imbalanced data, (b) the role of augmentation as a regularizer is undeniable, but the performance over the minority class is sensitive to the chosen augmentation policy, (c) the larger networks are more susceptible to overfitting minority class in the case of imbalanced data, (d) models pre-trained on larger datasets tends to perform well in this case, (e) the integration of self-supervised loss with the supervised learning referred to as Joint-SSL performs well, (f) Sharpness-Aware Minimization (SAM-A) pulls decision boundaries away from minority samples, (g) whereas standard training routines overfits, and (h) applying more smoothing to minority class examples than majority class examples prevents overfitting on minority samples. 
Furthermore, integrating these findings with Joint-SSL and SAM-A atop M2m (SOTA) establishes new state-of-the-art performance across all (class-imbalanced) benchmark datasets, CIFAR-10, CIFAR-100, and CINIC-10. Some other important observations made in the study are (i) SGD optimizer performed uniformly better on imbalanced data, (ii) training on data that is more balanced than the testing distribution did not improve representation learning by preventing overfitting to minority samples, (iii) a low correlation was observed between the performance over the web-scraped (benchmark) datasets and real-world datasets, and (iv) a correlation was observed between neural collapse and low test accuracy. Overall, this study emphasizes that simply tuning standard training routines can significantly improve performance in the case of imbalanced datasets.
Strengths: 1. This study approaches the significant and challenging problem of imbalanced learning.
2. The paper does extensive study and finds that state-of-the-art performance is achievable by finetuning the different components of existing deep learning pipelines.
3. It is interesting to investigate the imbalance problems using the proposed solutions, which are easy to understand.
Weaknesses: 1. The paper has bold statements at the beginning (resolving class imbalance in general) but has yet to explore the class imbalance issue in textual datasets.
2. The evaluation metrics used to assess the proposed solution revolve around accuracy.
3. Comparison with SMOTE is not reported but is mentioned as a baseline.
4. The dataset statistics, reflecting the nature of the imbalance in the dataset, need to be included.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. How does investigating train and test sets with varying imbalance ratios help?
2. Why are ResNets and WideResNet considered for experimentation out of so many neural network architectures?
3. Can the same behavior be expected in the case of vanilla neural networks and in the case of CNN, LSTM, and BiLSTM?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: 1. The paper does not propose the class imbalance issue in general.
2. The paper references other research work for experimental setup in many instances, which could have been avoided.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear Reviewer D6vk,
Thank you for your thorough and insightful review of our submission. We appreciate the time you've dedicated to evaluating our work and providing feedback. We address your points below:
## Exploration of Class Imbalance in Other Data Modalities:
While our study primarily focuses on image datasets, which present significant class imbalance problems, we agree that the issue applies to other modalities as well. Prompted by your feedback, we have now expanded our analysis to tabular datasets, which represent a challenging frontier for deep learning research. Although tabular datasets often inherently exhibit imbalance, limited research has addressed the impact of imbalanced data on deep learning in this domain. We applied our suggested methods (SAM-A, modified label smoothing, and SGD with cosine annealing performed with a small batch size) to three tabular datasets, the Otto Group Product Classification [1], Covertype [2], and Adult [3] datasets, using a Multilayer Perceptron (MLP) with improved numerical feature embeddings. Our methodology outperforms both XGBoost and recent state-of-the-art neural methods such as ResNet and FT-Transformer [4] on all three datasets, demonstrating the applicability of our findings beyond image classification. See the full results in Table 2 on the attached page. These additional results showcase the robustness of our findings, and we will include these updates in the camera-ready version.
## Evaluation Metrics:
The two most commonly used evaluation metrics under class imbalance are general accuracy, due to its simplicity and direct interpretability, and class-wise accuracy, where classes are grouped by the number of examples within each class [5,6]. These metrics provide comprehensive information about model behavior across different classes. In our work, we have reported these metrics both in the main text (for instance, in Figure 2) and in the appendix.
Prompted by your feedback, we have also incorporated the test likelihood error, which exhibits behavior similar to the aforementioned metrics. The inclusion of this metric provides an additional dimension to our evaluation, helping to assess the model's probabilistic performance on test data.
## Comparison with SMOTE:
Thanks for pointing out our oversight in not including SMOTE in the manuscript. Our study aims to compare our findings with relevant baselines to provide a comprehensive evaluation, and we will include SMOTE in our camera-ready version.
## Dataset Statistics:
We agree that detailed dataset statistics would be useful to include, so we have updated our draft accordingly.
## Investigating Train and Test Sets with Varying Imbalance Ratios:
In many real-world applications of machine learning, practitioners can curate data in a controlled fashion. Practitioners may choose, for example, to curate extra minority class samples or to curate a training set which reflects the balance they anticipate in the inference-time data. In this regard, the level of training imbalance can be a controllable hyperparameter, making it important to understand its effect on performance.
## Choice of ResNets and WideResNet:
We selected ResNets and WideResNet because they frequently appear in existing works involving class-imbalanced problems [7,8]. Prompted by your suggestion to explore other types of architectures, we have now conducted additional experiments using Swin Transformer v2 [9] and ConvNeXt [10]. These new models exhibit behaviors similar to the existing models, as displayed on the attached page. Additionally, we have now run various models on three tabular datasets [1-3], including MLP, FT-Transformer [4], ResNet, and XGBoost.
## Addressing Class Imbalance in General:
While our study provides numerous insights and methods to handle class imbalance, we agree that it is a broad and complex topic that cannot be fully addressed in a single paper.
Nevertheless, we believe our work contributes valuable pieces to the puzzle of handling class imbalance and provides a robust foundation for future investigations.
## References to Other Works for Experimental Setup:
We referenced other works for the experimental setup to provide readers with more context and to ensure reproducibility. We understand your comment about excessive referencing and will strive to strike a better balance in the camera-ready version.
Thank you again for your thoughtful review. We made a significant effort to address your feedback, including multiple experiments and paper edits, and we would appreciate it if you would consider raising your score in light of our response. Please let us know if you have additional questions we can address.
## References:
[1] Kaggle, 2015, Otto Group Product Classification Challenge.
[2] Blackard and Dean, 1999, Predicting forest cover types.
[3] Kohavi and Sahami, 1996, Error-based and entropy-based discretization.
[4] Gorishniy et al., 2021, Revisiting deep learning models for tabular data.
[5] Cao et al., 2019. Learning imbalanced datasets with label-distribution-aware margin loss.
[6] Cui et al., 2019. Class-balanced loss based on effective number of samples.
[7] Dablain et al., 2023, Efficient augmentation for imbalanced deep learning.
[8] Johnson et al., 2020, Survey on deep learning with class imbalance.
[9] Liu et al., 2021, Swin transformer v2: Scaling up capacity and resolution.
[10] Liu et al., 2022, A convnet for the 2020s.
---
Rebuttal Comment 1.1:
Title: Additional Experiments Regarding the Role of Batch Size
Comment: Dear Reviewer D6vk,
Thank you for your review. We hope that we have successfully addressed your concerns and demonstrated the novelty of our results. In addition to the experiments outlined in our previous rebuttal, which included more architectures such as Swin Transformer V2 and ConvNeXt and additional datasets such as Tiny-ImageNet and iNaturalist, we have conducted new experiments. These focus especially on the role of batch size in regularization during imbalanced training. Below, we briefly describe these experiments; for full details, including tables and figures, please see the general comments.
**1. Train/Test Error Analysis for Minority and Majority Classes:**
We examined the effects of varying batch sizes on the training and testing error for both minority and majority classes on Tiny-ImageNet using ConvNeXt. Our results indicate that larger batch sizes tend to increase overfitting, particularly for minority classes.
**2. Variance of Gradients Analysis for Balanced vs. Imbalanced Training:**
We investigated the variance of neural network parameter gradients as a function of batch size for Tiny-ImageNet on ConvNeXt, comparing balanced and imbalanced training scenarios. Our findings reveal that imbalanced training results in significantly lower gradient noise as batch sizes increase, which contributes to overfitting.
**3. Hessian Spectrum and Robustness:**
We analyzed the eigenvalues of the Hessian matrix in relation to batch size under balanced and imbalanced training conditions. The data indicates that imbalanced training leads to sharper minima (higher eigenvalues), especially with larger batch sizes, suggesting an increased risk of overfitting.
While we cannot send the figures directly due to openreview constraints, we have provided detailed tables summarizing the results in the general comments section and have sent the corresponding figures to the AC.
Thank you once again for your constructive feedback, and we look forward to your further comments. Please note that we have put significant effort into addressing your concerns, including conducting many new experiments based on your comments, to ensure the rigor and quality of our work, and we would greatly appreciate it if you would consider raising your score accordingly.
---
Rebuttal Comment 1.2:
Title: Re: Rebuttal by Authors
Comment: I thank the authors for the detailed response to my review. I appreciate the authors for performing additional experiments for my review. I also acknowledge having gone through my fellow reviewers' reviews and corresponding responses from the authors. I appreciate the authors for realizing the importance of evaluation metrics for classification models dealing with class-imbalanced data and presenting the results accordingly. SMOTE is a standard baseline for dealing with imbalanced datasets, and I would like to thank the authors for including it in their study. Accordingly, based on the authors' responses to my review and fellow reviewers, I am revising my score to 6. | Summary: In this paper, the authors study the problem of class imbalance. To be specific, they first investigate the effects of different hyper-parameters & design choices in an imbalanced setting. Moreover, the authors use such optimized settings with existing methods to show that significant improvements can be obtained.
After the rebuttal:
The authors addressed my concerns with detailed explanations and additional experiments. With those, I believe the paper provides useful insights for people working on class imbalance, therefore, I vouch for the paper's acceptance.
Strengths: 1. Easy to follow text.
2. Class imbalance is an important problem and therefore, analyses that offer more insights are valuable.
Weaknesses: 1. The most important issue I see with the paper is that its contribution is limited.
1.1. Out of the 6 Lessons (take-away points) in Introduction, Lessons 2-6 are already known in the literature. Only the analysis on batchsize is novel (as far as I know) and the results are intriguing.
1.2. The application of the tuned setting with the other methods does not offer any contributions.
2. More challenging datasets such as ImageNet-LT, iNaturalist are missing.
3. Regarding Lesson 1:
3.1. Lesson 1 (batchsize): The paper does not provide any intuition as to why batchsize has such an effect in an imbalanced setting.
3.2. Lesson 1: It would be worthwhile to see the same analysis with different architectures because, as we see in Figure 3, different architectures exhibit different behaviors under imbalance.
4. If I may, I suggest the authors focus only on batchsize and provide solid & theoretical insights about why/how it affects performance.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: See above.
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: None.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your valuable feedback. We address your points below.
## Contribution:
While the methods we examine have been explored in the context of balanced training, our unique contribution lies in the in-depth analysis of these methods within the challenging environment of class imbalance. Our investigation resulted in an imbalance-specific pipeline which greatly improves the performance of existing methods. Our approach offers actionable insights for practitioners and enriches the understanding of how to approach class imbalance problems.
- For Lesson 2, while several methods have suggested tailored data augmentations for training with imbalanced data [1,2], to our knowledge, there has been no comprehensive analysis of how high-performance augmentations designed for balanced training affect minority classes. Importantly, previous works suggest using new, complex data augmentation techniques to deal with class imbalance. In contrast, our analysis suggests that existing data augmentation techniques can achieve state-of-the-art results.
- For Lesson 3, some works have demonstrated that increasing model size [3, 4] improves performance across many datasets in the balanced learning scenario. However, no work has analyzed the effect of model size on imbalanced data and its significant differences from balanced training.
- For Lesson 4, several works have proposed SSL pre-training for class imbalance [5, 6, 7]. Yet, none have suggested training with the same data, without the need for additional data, by integrating supervised training with SSL training. We show that this technique has major performance benefits.
- For Lesson 5, [6] suggests using SAM directly to combat imbalance without an optimal perturbation per class. In contrast, our modified SAM treats majority and minority classes differently, adjusting the decision boundaries only for minority classes. Notably, parallel to our work, [8] suggested a more complex version of SAM that requires a two-stage process.
- For Lesson 6, [9] found that applying the same smoothing to all classes is ineffective for imbalanced datasets. However, our suggested method applies different degrees of smoothing to different classes based on imbalance. Namely, we apply more smoothing to the minority classes.
As detailed above, we are unaware of works that encapsulate these lessons. Can you please suggest references for lessons 2-6 already known in the literature?
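As a minimal sketch of the Lesson 6 idea, class-dependent label smoothing assigns a larger smoothing coefficient to rarer classes. The specific mapping from class counts to smoothing strength below is an illustrative assumption, not the exact formula from the paper:

```python
import numpy as np

def class_dependent_smooth_targets(labels, class_counts, eps_max=0.2):
    """One-hot targets with more smoothing applied to rarer classes.

    The linear mapping from class frequency to smoothing strength
    (eps = eps_max * (1 - count / max_count)) is illustrative only.
    """
    counts = np.asarray(class_counts, dtype=float)
    n_classes = len(counts)
    # The most frequent class gets no smoothing; rarer classes get more.
    eps = eps_max * (1 - counts / counts.max())
    targets = np.zeros((len(labels), n_classes))
    for i, y in enumerate(labels):
        e = eps[y]
        targets[i] = e / (n_classes - 1)  # spread mass over other classes
        targets[i, y] = 1 - e             # keep most mass on the true class
    return targets

counts = [900, 100]  # majority vs. minority class sizes
t = class_dependent_smooth_targets([0, 1], counts)
# The minority-class row is smoothed; the majority-class row stays one-hot.
print(t)
```

Each row remains a valid probability distribution, so the targets plug directly into a standard cross-entropy loss.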
## More Challenging Datasets and Models:
Prompted by your feedback, we have expanded our experiments and analyses to Tiny-ImageNet and iNaturalist, and to Swin Transformer v2 and ConvNeXt. Our suggested methods surpass SOTA imbalanced methods on these datasets. Moreover, these new architectures exhibit similar behavior to our previous experiments. We have also extended our analysis to tabular datasets, with our methodology outperforming SOTA methods. Please refer to the general rebuttal for full details.
## Batch Size
- **Insights on Batch Size:** Our study on batch size provides novel empirical results and intuitive reasoning. Specifically, we found that smaller batch sizes prevent overfitting, acting as a form of regularization in SGD (see [10] for the role of batch size). This effect introduces noise into the gradient estimation, counteracting the tendency to overfit. Our findings pave the way for a new understanding of imbalanced data and offer practical guidance for selecting appropriate batch sizes.
- **Different Architectures for Lesson 1:** We agree with your point and have expanded our analysis to include more models.
- **Focus on Batch Size:** We appreciate your suggestion to concentrate solely on batch size. While we recognize the importance of batch size exploration, our paper aims to provide a holistic view of the class imbalance problem and associated practical recommendations. Our wide-ranging investigation, encompassing various hyperparameters and design choices, offers unique insights, caters to various real-world scenarios, and yields large performance benefits for practitioners. To delve deeper into this issue, we examine its interaction with the joint-SSL method. While SSL models typically perform better with larger batch sizes for balanced datasets, our study suggests that the conventional wisdom for SSL models may not apply in this setting. We trained our joint-SSL algorithm with different batch sizes on CIFAR-10 to verify this. As seen in Figure 1 of the provided page, the impact of batch size is less pronounced, with joint-SSL acting as an alternative regularizer. While we are dedicated to further exploring the role of batch size in future work, we continue to value the multifaceted contributions of our current paper.
Thank you again for your thoughtful review. We made an effort to address your feedback, including multiple new experiments and paper edits, and we would appreciate it if you would consider raising your score in light of our response. Please let us know if you have additional questions we can address.
## References:
[1] Dablain et al., 2023. Efficient augmentation for imbalanced deep learning.
[2] Temraz et al., 2022. Solving the class imbalance problem using a counterfactual method for data augmentation.
[3] Goldberg et al., 2020. Rethinking fun: Frequency-domain utilization networks.
[4] Zhai et al., 2022. Scaling vision transformers.
[5] Kotar et al., 2021. Contrasting contrastive self-supervised representation learning pipelines.
[6] Liu et al., 2021. Self-supervised learning is more robust to dataset imbalance.
[7] Yang et al., 2020. Rethinking the value of labels for improving class-imbalanced learning.
[8] Zhou et al., 2023. Class-Conditional Sharpness-Aware Minimization for Deep Long-Tailed Recognition.
[9] Shen et al., 2021. Is label smoothing truly incompatible with knowledge distillation: An empirical study.
[10] Sekhari et al., 2021. SGD: The role of implicit regularization, batch-size, and multiple-epochs.
---
Rebuttal Comment 1.1:
Title: Re: Author rebuttal
Comment: I would like to thank the authors for the detailed responses and the huge effort. I really appreciate it.
The authors have responded to my concerns and questions. The explanation for the cause of changing the batch size sounds logical but is not demonstrated. This can be shown with one of the datasets and small models considered in the paper. With this, I would vouch for the acceptance of the paper. Otherwise, I find the insights and the results of the paper straightforward and unsuitable for a top venue like NeurIPS without proper justification. With this, I am increasing my recommendation to borderline.
---
Reply to Comment 1.1.1:
Title: Response to Reviewer 8NJC16’s Second Comment - First Part
Comment: Dear Reviewer 8NJC,
Thank you for your thoughtful feedback and for recognizing the effort we put into addressing your concerns. We greatly appreciate your time and valuable insights.
Prompted by your feedback regarding the effect of batch size, **we have now conducted three additional experiments**. While we are unable to send the figure directly, we have included detailed tables summarizing the results. Additionally, we have sent the corresponding figures directly to the AC. Below are the details of these new experiments and our key observations:
**1. Train/Test Error Analysis for Minority and Majority Classes:**
In these tables, we present the training error and testing error as a function of the batch size for both minority and majority classes for Tiny-ImageNet on ConvNeXt.
As depicted in the tables, during training, we observe that **the minority classes tend to overfit significantly more when using bigger batch sizes**. This overfitting is noticeably reduced as the batch size decreases, indicating that bigger batch sizes can lead to increased overfitting, particularly for minority classes.
| Batch Size | Train Error (Minority Classes) | Train Error (Majority Classes) | Test Error (Minority Classes) | Test Error (Majority Classes) |
|------------|--------------------------------|--------------------------------|-------------------------------|-------------------------------|
| 16 | 5.7 | 4.1 | 81.7 | 36.7 |
| 32 | 5.3 | 3.3 | 81.7 | 35.8 |
| 64 | 3.1 | 2.7 | 81.8 | 35.6|
| 128 | 1.7 | 2.2 | 81.9 | 35.2 |
| 256 | 0.9 | 1.9 | 82.0 | 34.6 |
| 512 | 0.4 | 1.6 | 82.2 | 34.6 |
| 1024 | 0.2 | 1.4 | 82.4 | 34.8 |
| 2048 | 0.1 | 1.3 | 82.7 | 34.9 |
| 4096 | 0.06 | 1.3 | 83.1 | 34.9 |
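The per-group errors in the table above could be computed along these lines. This is a sketch under our own assumptions: the function name `group_error` and the choice of which classes count as "minority" are illustrative, not taken from the paper.

```python
import numpy as np

def group_error(preds, labels, minority_classes):
    """Error rate (%) split into minority vs. majority classes."""
    preds, labels = np.asarray(preds), np.asarray(labels)
    in_min = np.isin(labels, list(minority_classes))  # mask of minority-class samples
    err = preds != labels
    minority_err = 100.0 * err[in_min].mean() if in_min.any() else 0.0
    majority_err = 100.0 * err[~in_min].mean() if (~in_min).any() else 0.0
    return minority_err, majority_err

# Class 2 treated as minority here: its single sample is correct (0% error),
# while 1 of the 3 majority samples is wrong.
mi, ma = group_error(preds=[0, 1, 2, 2], labels=[0, 1, 2, 1], minority_classes={2})
```

The same split (applied to train and test predictions separately) yields the four columns reported in the table.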
**2. Variance of Gradients Analysis for Balanced vs. Imbalanced Training:** The variance of neural network parameter gradients serves as a crucial indicator of the network's learning dynamics, reflecting the stability and noise in the parameter updates during training.
This variance reveals a trade-off between generalization and overfitting; specifically, lower gradient variance often coincides with overfitting, while higher variance can introduce a form of implicit regularization, a hypothesized benefit of SGD, promoting better generalization at the cost of noisier updates.
For this experiment, we investigate the variance of the gradients as a function of the batch size for Tiny-ImageNet on ConvNeXt, comparing two distinct training scenarios: balanced and imbalanced training.
The table vividly demonstrates that **imbalanced training exhibits much lower gradient noise, especially as batch sizes grow larger**. This phenomenon highlights why models are more prone to overfitting under imbalanced training, as the lower gradient noise enables the model to overfit more tightly to the training data without regularization.
| Training Type | 16 | 32 | 64 | 128 | 256 | 512 | 1024 | 2048 | 4096 |
|--------------------|------|------|------|------|------|------|------|------|------|
| Balanced Training | 8e-2 | 7e-2 | 4e-2 | 9e-3 | 7e-3 | 5e-3 | 1e-3 | 8e-4 | 6e-4 |
| Imbalanced Training| 4e-2 | 2e-2 | 9e-3 | 8e-4 | 7e-5 | 3e-5 | 9e-6 | 3e-6 | 3e-7 |
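The quantity measured above can be illustrated on a toy model. This sketch is ours, not the paper's code: a linear least-squares model stands in for the network, and `minibatch_grad_variance`, `n_draws`, and all sizes are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def minibatch_grad_variance(X, y, w, batch_size, n_draws=200):
    """Mean per-coordinate variance of minibatch gradients of the squared loss.

    Each draw computes the gradient of 0.5*mean((Xb @ w - yb)^2) on a random
    batch; the spread of these batch gradients is the noise that shrinks as
    the batch size grows.
    """
    grads = []
    for _ in range(n_draws):
        idx = rng.choice(len(X), size=batch_size, replace=False)
        Xb, yb = X[idx], y[idx]
        grads.append(Xb.T @ (Xb @ w - yb) / batch_size)
    return np.var(np.stack(grads), axis=0).mean()

X = rng.normal(size=(4096, 10))
y = rng.normal(size=4096)
w = rng.normal(size=10)
v_small = minibatch_grad_variance(X, y, w, batch_size=16)
v_large = minibatch_grad_variance(X, y, w, batch_size=1024)
# gradient noise shrinks as the batch size grows, mirroring the rows above
```

For real networks the same measurement is done per parameter tensor over many sampled minibatches at a fixed checkpoint.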
**3. Hessian Spectrum and Robustness:**
Another indicator of generalization and overfitting is loss flatness, for example measured by Hessian singular values. Recent work by Yao et al. (2018) uses the Hessian spectrum and shows that large-batch training leads to convergence to points with high curvature. They characterize curvature via the dominant eigenvalue of the Hessian and suggest that high curvature can lead to poor generalization. In order to investigate this issue in the context of imbalanced training, we train models using both balanced and imbalanced datasets for different batch sizes and calculate the top eigenvalue of the Hessian with respect to the model parameters. In the table below, we can see that **the top eigenvalue of the Hessian for imbalanced training increases much faster as we increase the batch size compared to balanced training.** This suggests that higher batch sizes increase the risk of overfitting, and this phenomenon is worse for imbalanced data.
| Training Type/Batch Size | 16 | 32 | 64 | 128 | 256 | 512 | 1024 | 2048 | 4096 |
|--------------------------|----|----|----|-----|-----|-----|------|------|------|
| Balanced Training | 12 | 21 | | 39 | 109 | 173 | 291 | 404 | 578 |
| Imbalanced Training | 16 | 37 | | 69 | 159 | 253 | 396 | 739 | 1428 | | Rebuttal 1:
Rebuttal: Dear Reviewers,
Thank you for your thoughtful and detailed reviews of our work. We appreciate your time and the constructive feedback you have provided. We have carefully considered your comments and concerns and would like to address them in a unified response:
**1. Diversity of Architectures and Additional Experiments and Results:**
In response to your feedback, we expanded our experiments to include different models, such as Swin Transformer v2[1] and ConvNeXt[2], and more challenging visual datasets, like Tiny-ImageNet[3] and iNaturalist[4]. The results of these additional experiments consistently align with our previous findings, which enhances the robustness and general applicability of our work. Furthermore, these enhancements increase the diversity of our architectural analysis, contributing to a broader understanding of various architectures' behavior under imbalance.
**2. Inclusion of Tabular Datasets:**
Recognizing the significance of class imbalance in various domains, we have extended our analysis to incorporate tabular datasets. This addition enriches our research, as tabular datasets often inherently exhibit imbalance and have been less explored in the context of deep learning. Our methodology was applied to three tabular datasets[5-7], demonstrating its effectiveness beyond image classification. Our methodology outperforms both XGBoost and recent state-of-the-art neural methods [8] on all three datasets. These results underscore the robustness of our findings and their relevance to real-world applications.
**3. Novelty and Contribution:**
Our work demonstrates that techniques designed for class-balanced data can easily be tuned or adapted to class-imbalanced data for state-of-the-art results, even outperforming methods specifically designed for imbalance. We show that existing data augmentation techniques, SSL, SAM, and label smoothing, which are commonly used in balanced setups, can be effectively applied to imbalanced training. Moreover, we provide critical insights into the performance of imbalance-specific methods in real-world scenarios, illustrating a lack of direct correlation between their performance on conventional benchmark datasets and their effectiveness on naturally imbalanced datasets. This finding encourages a broader evaluation of methods and contributes to our understanding of class imbalance training. Our work represents a perspective shift on training under class imbalance; moreover, we achieve significant performance gains across multiple benchmark and real-world datasets.
**4. Batch Size and Interaction with Joint-SSL:**
Prompted by your comments on our study of batch size, we now delve deeper in this direction. We find that smaller batch sizes tend to yield better performance by preventing overfitting on minority classes, acting as a form of regularization in SGD[9]. This effect introduces noise into gradient estimation, smoothing the loss landscape and counteracting the tendency to overfit to minority examples. Furthermore, we investigated the interplay between batch size and joint-SSL, an aspect of regularization. While SSL models usually perform better with larger batch sizes under balanced datasets, our study shows that the dynamics change when handling imbalanced data. We discovered that the effect of batch size is less significant when using joint-SSL, which acts as an alternative regularizer, mitigating the overfitting of minority classes. These findings highlight the complex interplay between batch size and regularization methods, providing valuable insights for practitioners dealing with imbalanced data.
We have put forth a significant effort to address all your feedback and feel that your suggestions have improved our work. We welcome further discussion, and we appreciate your valuable input.
### References:
[1] Liu et al., 2021, Swin transformer v2: Scaling up capacity and resolution.
[2] Liu et al., 2022, A convnet for the 2020s.
[3] Le et al, 2015, Tiny imagenet visual recognition challenge.
[4] Van Horn et al., 2018, The iNaturalist species classification and detection dataset.
[5] Kaggle, 2015, Otto Group Product Classification Challenge.
[6] Blackard and Dean, 1999, Predicting forest cover types.
[7] Kohavi and Sahami, 1996, Error-based and entropy-based discretization.
[8] Gorishniy et al., 2021, Revisiting deep learning models for tabular data.
[9] Sekhari et al., 2021. SGD: The role of implicit regularization, batch-size, and multiple-epochs.
Pdf: /pdf/6c43ce38b9e25e05d65e7b09aca5bd76b3eaad20.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: The authors suggest tackling the class imbalance issue from a hyperparameter optimization perspective. Through an extensive empirical study, they raise questions about the behavior of well-established techniques for balanced data under long-tail data distributions. The synergy of the resulting prescriptions is also checked on several datasets.
Strengths: * This paper introduces a new approach to tackle the class imbalance issue by optimizing several hyperparameters. The paper is clearly written, and the claims are well-stated and sufficiently supported empirically in most cases.
* The authors tried to sketch intuitive explanations for some 'unexpected' outcomes.
* The authors reported the performance of several combinations of their micro-recipes and show strong results on several datasets.
Weaknesses: * The content of this paper is clearly going beyond the maximum allowed number of pages. To avoid violating the rules, the authors decided to move very relevant parts of the paper to the appendix, which makes the paper less readable. Nevertheless, I insist that this action does not compromise the clarity. It would have been nicer to cut the less important components and limit the scope of the paper to the most important hyperparameters.
* The novelty of this paper is limited: some 'findings' are valid for the balanced setup as well (pretraining, SSL) or are just trivial (AutoAugment).
* For a purely empirical paper, it is important to diversify the architectures in order to draw more valuable conclusions. I would have loved to see the performance of a transformer-based model under different constraints.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: From one side, the batch size should not grow too much, but from the other side, applying SSL helps. It is known that SSL models tend to perform better for larger batch sizes and balanced datasets. Does this rule of thumb apply as well for unbalanced data?
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: none
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear Reviewer kUXT,
Thank you for your detailed and thorough evaluation of our submission. We appreciate your comments and address your concerns and questions below.
## Page Limit and Readability:
We acknowledge your concern about the content exceeding the page limit and agree that moving some parts to the appendix might have affected the readability. Our intention was to provide as much detail as possible to support our claims while adhering to the page limit. We understand that some parts of the paper might be less critical to the central argument and can be omitted or simplified. We have now split Section 4 into two sections based on the types of methods: (1) the first group (batch size, data augmentation, pre-training, model architecture, optimizers), which serves as the fundamental 'building blocks' of regular balanced training; for this group, we investigated how altering their hyperparameters impacts the performance of imbalanced training. (2) The second group consists of optimization methods that we slightly modified to address imbalanced training (Joint-SSL, SAM, and label smoothing); we examine the effects of these methods on performance only in the next section. Furthermore, we are revising the paper to improve the balance between detailed explanation and readability, focusing on the most crucial aspects of our approach, and we are open to revising it even further.
## Novelty:
We believe the value of our paper lies in the new perspectives we provide on class-imbalanced training. We demonstrate the benefits of our simple and easy-to-use tuned or imbalance-adapted training techniques, specifically in the context of class imbalance, which was previously dominated by specialized loss functions or samplers designed specifically for class imbalance. Our work represents a perspective shift on training under class imbalance; moreover, we achieve significant performance gains across multiple benchmark and real-world datasets.
## Diversity of Architectures:
Our submission focused on traditional architectures due to their prevalent use in the class imbalance literature. However, we agree that the analysis would be enriched by considering more architectures like transformers. Prompted by your feedback, we have now added additional experiments. We ran our analysis on more models - Swin Transformer v2 [1] and ConvNeXt [2]. These new models exhibit behavior similar to the current models (see Table 1 on the attached page). Moreover, we expanded our analysis to tabular datasets. These datasets represent a challenging frontier for deep learning research. Despite tabular datasets often inherently exhibiting imbalance, limited research has addressed the impact of imbalanced data on deep learning in this domain. We incorporate our suggested methods (SAM-A, modified label smoothing, and SGD with cosine annealing performed with a small batch size) into three tabular datasets - the Otto Group Product Classification [3], Covertype [4], and Adult [5] datasets, using a Multilayer Perceptron (MLP) with improved numerical feature embeddings. Our methodology outperforms both XGBoost [REF] and recent state-of-the-art neural methods such as ResNet and FT-Transformer [6] on all three datasets, demonstrating the applicability of our findings beyond image classification. See the full results in Table 2 on the attached page. These additional results showcase the robustness of our findings and further substantiate our conclusions.
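As background for the SAM-based methods mentioned above, the generic sharpness-aware step they build on can be sketched on a toy objective. This is not our SAM-A implementation: the class-dependent perturbation radius is not shown, and `sam_step` and its parameters are illustrative names for this sketch only.

```python
import numpy as np

def sam_step(w, grad_fn, lr=0.1, rho=0.05):
    """One generic SAM update: ascend to a nearby adversarial point,
    then descend using the gradient evaluated there.

    A class-dependent variant would replace the single scalar rho with a
    per-class radius; this sketch shows only the generic two-step form.
    """
    g = grad_fn(w)
    eps = rho * g / (np.linalg.norm(g) + 1e-12)  # worst-case weight perturbation
    g_adv = grad_fn(w + eps)                     # gradient at the perturbed weights
    return w - lr * g_adv

# Toy quadratic loss 0.5*||w||^2 with gradient w: SAM still drives w toward 0,
# but each gradient is taken at the locally "sharpest" nearby point.
w = np.array([1.0, -2.0])
for _ in range(100):
    w = sam_step(w, grad_fn=lambda v: v)
```

In practice `grad_fn` is a minibatch backward pass, so SAM costs roughly two gradient evaluations per update.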
## Question on Batch Size and SSL:
Your observation about the relationship between batch size and SSL is indeed intriguing. SSL models tend to perform better with larger batch sizes under balanced datasets. However, the dynamics change when it comes to imbalanced data. Our study on batch size provides both novel empirical results and intuitive reasoning. Specifically, we found that smaller batch sizes prevent overfitting on minority classes, acting as a form of regularization in SGD (see [7] for an analysis of the role of batch size as a regularizer for the loss landscape which encourages generalization). This effect introduces noise into gradient estimation, smoothing the loss landscape, and counteracting the tendency to overfit to minority examples. This suggests that the rule of thumb for SSL models might not directly apply to imbalanced data. To test this, we trained our joint-SSL algorithm using different batch sizes on CIFAR-10. As can be seen in Figure 1 on the provided page, the effect of the batch size is less significant, with joint-SSL behaving as an alternative regularizer, preventing the overfitting of the minority classes.
Thank you again for your thoughtful review. We made an effort to address your feedback including multiple experiments and paper edits, and we would greatly appreciate it if you would consider raising your score in light of our response. Please let us know if you have additional questions we can address.
## References:
[1] Liu et al., 2021, "Swin transformer v2: Scaling up capacity and resolution".
[2] Liu et al., 2022, "A convnet for the 2020s".
[3] Kaggle, 2015, "Otto Group Product Classification Challenge".
[4] Blackard and Dean, 1999, "Predicting forest cover types".
[5] Kohavi and Sahami, 1996, "Error-based and entropy-based discretization".
[6] Gorishniy et al., 2021, "Revisiting deep learning models for tabular data".
[7] Sekhari et al., 2021. "SGD: The role of implicit regularization, batch-size, and multiple-epochs".
---
Rebuttal Comment 1.1:
Title: Additional Experiments Regarding the Role of Batch Size
Comment: Dear Reviewer kUXTr,
Thank you for your review. We hope that we have successfully addressed your concerns and demonstrated the novelty of our results. In addition to the experiments that we outlined in the previous rebuttal, including more architectures such as Swin Transformer V2 and ConvNeXt, as well as challenging datasets like Tiny-ImageNet and iNaturalist, we have conducted new experiments. **These new experiments focus especially on the role of batch size in regularization during imbalanced training**. Below, we describe these experiments in brief; for full details, please see the general comments.
**1. Train/Test Error Analysis for Minority and Majority Classes:**
We present the training and testing error as a function of batch size for both minority and majority classes on Tiny-ImageNet using ConvNeXt. The tables reveal that, during training, minority classes tend to overfit significantly more when using bigger batch sizes. This overfitting noticeably reduces as the batch size decreases, indicating that larger batch sizes can lead to increased overfitting, particularly for minority classes.
| Batch Size | Train Error (Minority Classes) | Train Error (Majority Classes) | Test Error (Minority Classes) | Test Error (Majority Classes) |
|------------|--------------------------------|--------------------------------|-------------------------------|-------------------------------|
| 16 | 5.7 | 4.1 | 81.7 | 36.7 |
| 32 | 5.3 | 3.3 | 81.7 | 35.8 |
| 64 | 3.1 | 2.7 | 81.8 | 35.6|
| 128 | 1.7 | 2.2 | 81.9 | 35.2 |
| 256 | 0.9 | 1.9 | 82.0 | 34.6 |
| 512 | 0.4 | 1.6 | 82.2 | 34.6 |
| 1024 | 0.2 | 1.4 | 82.4 | 34.8 |
| 2048 | 0.1 | 1.3 | 82.7 | 34.9 |
| 4096 | 0.06 | 1.3 | 83.1 | 34.9 |
**2. Variance of Gradients Analysis for Balanced vs. Imbalanced Training:**
We analyze the variance of neural network parameter gradients, a crucial indicator of the network's learning dynamics. This variance reveals a trade-off between generalization and overfitting. Our experiment investigates the variance of gradients as a function of batch size for Tiny-ImageNet on ConvNeXt, comparing balanced and imbalanced training scenarios. The table vividly demonstrates that imbalanced training exhibits much lower gradient noise, especially as batch sizes grow larger, explaining why models are more prone to overfitting under imbalanced training conditions.
| Training Type | 16 | 32 | 64 | 128 | 256 | 512 | 1024 | 2048 | 4096 |
|--------------------|------|------|------|------|------|------|------|------|------|
| Balanced Training | 8e-2 | 7e-2 | 4e-2 | 9e-3 | 7e-3 | 5e-3 | 1e-3 | 8e-4 | 6e-4 |
| Imbalanced Training| 4e-2 | 2e-2 | 9e-3 | 8e-4 | 7e-5 | 3e-5 | 9e-6 | 3e-6 | 3e-7 |
**3. Hessian Spectrum and Robustness:**
We examine loss flatness through the Hessian spectrum, a known indicator of generalization and overfitting. Recent work by Yao et al. (2018b) shows that large batch training leads to convergence to points with high curvature, characterized via the dominant eigenvalue of the Hessian, and suggests that high curvature can lead to poor generalization. To investigate this in the context of imbalanced training, we trained models using both balanced and imbalanced datasets for different batch sizes and calculated the top eigenvalue of the Hessian with respect to the model parameters. Our table shows that the top eigenvalue of the Hessian for imbalanced training increases much faster as we increase the batch size compared to balanced training, indicating a higher risk of overfitting, especially for imbalanced data.
| Training Type/Batch Size | 16 | 32 | 64 | 128 | 256 | 512 | 1024 | 2048 |
|--------------------------|----|----|----|-----|-----|-----|------|------|
| Balanced Training | 12 | 21 | 36 | 112 | 173 | 291 | 404 | 571 |
| Imbalanced Training | 15 | 37 | 68 | 159 | 253 | 396 | 739 | 1428 |
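The top Hessian eigenvalue reported above is typically estimated with power iteration over Hessian-vector products. The sketch below is ours, not the paper's code: an explicit toy matrix stands in for the Hessian-vector-product oracle (which, for a real network, would be implemented via double backprop), and `top_eigenvalue` is an illustrative name.

```python
import numpy as np

def top_eigenvalue(hvp, dim, n_iter=100, seed=0):
    """Estimate the dominant Hessian eigenvalue by power iteration.

    `hvp(v)` returns the Hessian-vector product H @ v; we repeatedly apply
    it and renormalize, then read off the Rayleigh quotient at the end.
    """
    rng = np.random.default_rng(seed)
    v = rng.normal(size=dim)
    v /= np.linalg.norm(v)
    for _ in range(n_iter):
        hv = hvp(v)
        v = hv / np.linalg.norm(hv)
    return float(v @ hvp(v))  # Rayleigh quotient at the converged direction

# Toy symmetric "Hessian" with known spectrum {1, 2, 5}.
H = np.diag([1.0, 2.0, 5.0])
lam = top_eigenvalue(lambda v: H @ v, dim=3)
# lam is approximately 5.0, the largest eigenvalue
```

Only matrix-vector products are needed, which is why this scales to the full parameter Hessian of a network without ever forming the matrix.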
These additional experiments respond directly to your comments, offering concrete justification for the observations in our paper. While we cannot send the figures directly due to openreview constraints, we have included detailed tables summarizing the results and have sent the corresponding figures to the AC.
Thank you once again for your constructive feedback, and we look forward to your further comments. Please know that we have put significant effort into addressing your concerns, including conducting many new experiments based on your comments, to ensure the rigor and quality of our work, and we would greatly appreciate it if you would consider raising your score accordingly.
---
Reply to Comment 1.1.1:
Title: Addressing Reviewer Feedback
Comment: Dear Reviewer kUXT,
We have made a significant effort to address the questions and concerns raised in your review. This includes conducting many more experiments, such as delving deeper into the role of batch size in regularization during imbalanced training.
We kindly ask that you take these into consideration during your final assessment and consider raising your score accordingly.
As the review period draws to an end, please let us know if there are any further questions we can address.
Thank you | null | null | null | null | null | null |
Stochastic Collapse: How Gradient Noise Attracts SGD Dynamics Towards Simpler Subnetworks | Accept (poster) | Summary: The paper studies the properties of stochastic gradient descent using its continuous-time SDE approximation. The invariances of the stochastic dynamics in the representation space are investigated, and rigorous conditions under which such invariant sets act as attractors of the SDE are derived. Furthermore, it is empirically validated that neurons collapse towards these invariant sets, and the generalization aspects of this phenomenon are studied in two-layer linear networks.
Strengths: a) The paper tackles a very important question of understanding the regularization properties of stochastic algorithms. It formally characterizes the two invariant properties of stochastic algorithms on deep feedforward networks and establishes the conditions on the drift and variance terms of the SDE under which these sets act as attractors. In addition, the conditions are interpreted in one-dimensional and two-dimensional settings.
b) The above formalism established is novel and is useful to study the empirical behaviour of the deep networks and the role of the learning rate and its influence on generalization.
Weaknesses: a) The paper uses label noise gradient descent and stochastic gradient descent interchangeably. However, note that these are two different algorithms; hence it should be explicitly stated for which algorithm the results are applicable. Although they share some similar properties, they also differ, particularly in the strength of the noise (Andriushchenko et al., Ziyin et al.). The paper appears sloppy in this regard.
b) Although the paper establishes a more formal treatment of the phenomena of sign invariance and permutation invariance, Andriushchenko et al. provide empirical evidence and theoretical intuitions on the same phenomena.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: a) In the results for the evaluation of singular values of linear networks, the set of assumptions used appears strong, particularly the assumption that the noise in labels aligns with the input-output covariance. Can it be verified that the noise of SGD satisfies this assumption?
b) The results in Figure 3 and Theorem 6.1 have been studied when label noise is introduced in the algorithm; how do these change when using vanilla mini-batch SGD? The conclusion that large step sizes improve generalization therefore cannot be generalized to mini-batch SGD.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: The limitations are already addressed in the sections above.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your review and the suggestions to make our work better. We will discuss the weaknesses and questions of our paper you pointed out:
**Weakness A**: We appreciate your valuable observation of the confusions that arose from our interchangeable use of noise introduced by minibatch SGD and label noise gradient descent (LNGD). This is important and we are committed to enhancing our paper in this regard. We firmly believe that the improvement we discuss below in each section will make our claims more solid:
- Sec. 3: The existence of sign/permutation invariant sets is valid for both SGD and LNGD. Hence we can discuss these in the same scheme without any problem.
- Sec. 4: Here, we argue that SGD can be seen as GD with a noisy term, leading us to study the stochastic gradient flow (SGF) as defined in Equation (4). Our theoretical analyses in Sec. 4 are firmly based on SGF. SGF can describe both SGD and LNGD, while their diffusion terms may have different structure.
- Sec. 5,6: In these sections, we apply the theoretical results in Sec. 4 on SGF. We recognize that these sections need to be improved since we restrict ourselves to the case where the noise of SGF comes from the label noise, and hence we lose the generality of SGF. We used the label noise instead of the minibatch noise for the reason that label noise strength can be directly linked to the noise amplitude in SGF. However, in both these sections the analysis does not change if we let $\zeta$ be the mini-batch gradient noise amplitude. The other source of confusion might come from our empirical results, which use label noise (for the same reason discussed above). We now also include a new empirical figure (Fig. S3 in the attached PDF), in which we use SGD minibatch noises to validate the outcomes previously demonstrated solely with label noise (Fig. 3). Note that Fig. 1, the top of Fig. 2 and Fig. 4 do **not** use label noise, indicating strong stochastic collapse for minibatch SGD.
- Appendix: In the appendix of our revised manuscript, we will add a section on the similarities and differences (both theoretically and empirically) between SGD and LNGD.
We believe these improvements will greatly enhance our treatment of the subject and we hope they will positively influence your evaluation of our work.
**Weakness B**: We thank the reviewer for pointing out the similarity between our analysis and the previous work by Andriushchenko et al., which we cited and discussed (lines 71-73) in our paper. Given the similarity to our work, we will add more discussions in our paper to address the substantial differences. The novelty of their work can be summarized as: 1. They revealed that large-step SGD dynamics have effective slow dynamics with multiplicative noise during loss stabilization. This allowed the authors to model the dynamics as a specific SDE, justified with empirics. 2. They theoretically analyzed diagonal linear networks, showing that the SDE has implicit bias toward sparser representations. 3. They conjectured that deep nonlinear networks also show this phenomenon, which is supported by the empirical results. 4. They argued that SGD first learns sparse features and then fits the data after the step size is decreased. With this foundation laid, we would like to emphasize two fundamental aspects that set our work apart:
1. Our analysis goes far beyond the basic understanding of diagonal linear networks, offering a broader perspective by introducing the invariant sets with a more general theorem (Theorem 4.1). In their work, the authors conjectured an implicit bias towards sparser features beyond their diagonal linear model, leaving this exploration open as an "_exciting avenue for future work_". We believe that our paper makes substantial progress along this avenue by understanding the implicit bias towards sparse representations as stochastic collapse to the invariant sets. Furthermore, our theory is applicable to general deep neural nets. We also provide a theoretical framework that predicts the quantitative conditions for this attractiveness in a general setting. While applying this theorem to arbitrary neural nets is a challenging future work, our work utilized this theorem to quantitatively analyze the collapsing threshold of simple models.
2. Our results shed light on the collapsing phenomena of weight vectors, a perspective that contrasts with their emphasis on neuron similarity based on activation patterns. Given that weight vector similarities often lead to similarity in activation patterns, our findings suggest that neural networks adhere to a stronger condition.
**Question A**: We would like to point out that A1, A3, and A4 are standard assumptions used in the canonical papers [Saxe et al. 2014] and [Lampinen & Ganguli 2018], which establish the linear-teacher student model as a setting for studying generalization. As the reviewer rightly points out, the assumption that gradient noise aligns with input-output covariance (A2) is the new assumption we added such that we could derive analytical expressions for the theory when incorporating the new ingredient of stochasticity to the analysis. The technical difficulties for relaxing assumption A2 come from the problem that stochastic noises can perturb the eigenbasis, which adds an extra degree of complexity to the problem. Although we agree that relaxing those assumptions is theoretically interesting and important, we want to point out that our theory with those assumptions captures the key behavior of the general SGD dynamics. We now include additional empirical results (Fig. S3 in the attached PDF) which verifies that the key observed phenomena hold even without those assumptions and on minibatch SGD.
**Question B**: The answer to this question is addressed in our response to Weakness A. In summary, the conclusion that large step sizes improve generalization does apply to mini-batch SGD without label noise (see Fig. 4 and Fig. S3 in the attached pdf).
---
Rebuttal Comment 1.1:
Title: Reply to Author Rebuttal
Comment: Thank you for your reply. My score remains unchanged. A few comments on the rebuttal below.
Weakness A, Sec 5,6: Yes, I agree that you can set $\zeta$ to the mini-batch noise amplitude; however, it then depends on $t$, will not be constant, and can vanish as time progresses, thus showing that the collapse might not be stronger. It would be nice to make this point clear.
---
Reply to Comment 1.1.1:
Comment: Thank you for your reply and feedback, which has been very helpful in improving our work.
The mini-batch noise should not depend on “t” directly, but rather implicitly through a dependency on position. The SDE used in our analysis in Sections 5 and 6 accounts for an aspect of this positional dependency through the multiplicative structure. That said, you are correct that there can be a further positional dependency unaccounted for in $\zeta$ when considering SGD instead of LNGD. For example, if an interpolating point exists, that is, there exists some point $\theta_*$ where $\mathcal{L}(\theta_*) = 0$, then clearly the mini-batch noise amplitude near $\theta_*$ should be much smaller than at a random initialization. One way to account for this would be to insert a dependency on the training loss $\mathcal{L}(\theta)$ into $\zeta$. Unfortunately, this would make the analysis very complex and potentially intractable (we are currently working out whether Theorem 5.1 can be extended in this manner). If the training data is not interpolatable, then there is always a positive lower bound on $\zeta$, and we don’t expect the main lessons from these sections to change drastically. As highlighted previously, the experiments for both these sections match well with the theory when run with SGD. This said, we should make it clearer what factors influence $\zeta$ in SGD and why we assume $\zeta$ is positionally independent in order to make the analysis tractable.
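The positional dependency discussed above can be illustrated with a small numpy sketch (an illustrative toy, not our paper's code): on interpolatable linear-regression data, the empirical minibatch gradient-noise amplitude near the interpolating point $\theta_*$ is orders of magnitude smaller than at a random initialization.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.standard_normal((512, 20))
w_star = rng.standard_normal(20)
y = X @ w_star                       # interpolatable: L(w_star) = 0

def noise_amplitude(w, batch=32, trials=200):
    # Average deviation of minibatch gradients from the full gradient,
    # an empirical proxy for the local noise amplitude zeta at w.
    full = X.T @ (X @ w - y) / len(y)
    devs = []
    for _ in range(trials):
        idx = rng.choice(len(y), size=batch, replace=False)
        g = X[idx].T @ (X[idx] @ w - y[idx]) / batch
        devs.append(np.linalg.norm(g - full))
    return float(np.mean(devs))

far = noise_amplitude(rng.standard_normal(20))                  # random init
near = noise_amplitude(w_star + 1e-3 * rng.standard_normal(20))  # near theta_*
```

Here `near` is far smaller than `far`, consistent with the noise amplitude vanishing at an interpolating point.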
We hope that the changes we discussed in our rebuttal and the above discussion alleviate your concerns around Weakness A in your review. Have we addressed your other concerns and questions? Is there an aspect of our work that is holding you back from raising your score? Please let us know if you have any more questions. We look forward to hearing from you. And again, thank you for your time and feedback. | Summary: This paper demonstrate a low-dimensional invariant sets, namely a subset of parameter space, may remain unmodified by SGD. It means that SGD dynamics may lead to simple subnetworks. The theoretical mechanism behind is formally introduced. Moreover, the derived theoretical results revealed that the so-called stochastic collapse may influence generalization in a simplified linear teacher-student setup.
Strengths: - The discovered stochastic collapse is interesting for understanding the optimization dynamics of deep learning. Some counterintuitive insights are provided to the community.
- A formal theoretical analysis with detailed proofs is presented for understanding stochastic collapse. It suggests that stochastic collapse can be precisely analyzed under certain assumptions.
- The empirical evidence is direct and informative.
Weaknesses: - I think this paper needs to significantly improve the literature review and discussion. Some closely related studies (e.g. [1,2]) are absent. [1] demonstrated that SGD favors flat minima and that SGD dynamics may happen in a low-dimensional parameter space. It would be helpful to explain how the simple subnetworks differ from and relate to the low-dimensional learning subspace in the previous studies. How close are the two low-dimensional subspaces? Do they provide some different insights?
- Generalization theoretical analysis is an important part of this paper. While the paper claims that generalization benefits can be theoretically influenced by stochastic collapse, I did not see really novel or quantitative insights. If the theoretical results are really precise and informative, why not design some novel methods to improve generalization with theoretical support? For example, if the role of early large learning rates can be well understood in the theory, is it possible to schedule the learning rate in a theory-guided manner that works well in practice? If not possible, why?
Reference:
[1] Xie, Z., Sato, I., & Sugiyama, M. (2020, October). A Diffusion Theory For Deep Learning Dynamics: Stochastic Gradient Descent Exponentially Favors Flat Minima. In International Conference on Learning Representations.
[2] Gur-Ari, G., Roberts, D. A., & Dyer, E. (2018). Gradient descent happens in a tiny subspace. arXiv preprint arXiv:1812.04754.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Please see the weaknesses.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: I did not see any potential negative societal impact.
The main limitation of this work lies in its technical contributions, as I mentioned in the weaknesses.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your review and the time you spent suggesting ways to make our work better. We discuss the weaknesses of our paper you raised:
**Weakness 1**: We appreciate the referenced works and will incorporate them into our study. We have updated our related work section to cite these works. We welcome any further suggestions on references or other concerns. Here we will focus specifically on these two works and their relationship to our work:
Xie, Z., Sato, I., & Sugiyama, M.: We acknowledge the significance of this prior research, which connects the covariance matrix of SGD with the Hessian around local minima and demonstrates that the escape rate from minima is linked to the Hessian eigenvalues or flatness. Furthermore, the authors briefly discussed that minima selection primarily occurs in a low-dimensional subspace spanned by the top Hessian eigenvectors. Our work both expands on and diverges from this analysis in a couple of key ways:
1. New concepts: We introduce the concept of sign/permutation invariant sets, bridging the low-dimensionality of SGD dynamics with the sparsity of the network representations. In the vicinity of the invariant sets, the Hessian along orthogonal directions has near-zero eigenvalues, and hence we believe that the low-dimensional subspace discussed by Xie, Z. et al. is nested in one of these invariant sets we introduced. Verifying this hypothesis is an interesting direction for future work.
2. Distinct Mechanism: We reveal the mechanism on why the low-dimensional invariant sets can be attractive. This mechanism is distinct from the previous analyses for two reasons. First, their work is based on the Kramers Escape Problem. Contrarily, in our investigation, the stochastic attractivity comes from the multiplicative nature of the noise and the effective entropy of invariant sets, which means that the escape rate doesn't bear direct relevance in our context. Second, while the paper briefly discussed SGD's biases towards low-dimensional subspaces, their analyses are built on the empirical fact that the Hessian has many near-zero eigenvalues with a few large ones. The theoretical explanation of this empirical fact is not provided. In contrast, our analyses are not tied to specific assumptions about the Hessian spectrum, marking a different analytical perspective.
Gur-Ari, G., Roberts, D. A., & Dyer, E: This insightful work shows that the Hessian splits into two slowly varying subspaces during the course of training and the gradient lives in the subspace spanned by the principal eigenvectors. They also highlight the exploration of the "_transparent meaning_" of these eigenvectors for future research. In line with this, our investigation suggests a potential correlation between the low-dimensional space and the invariant sets. We consider this connection a promising area for exploration and believe it could contribute to understanding the "_transparent meaning_" that the previous work hinted at.
**Weakness 2**: We agree that designing improved algorithms based on our insights into stochastic collapse is a major goal for our future work. Nonetheless, the goal of this work was to highlight a novel, not yet well understood (and indeed undiscovered) implicit bias of SGD, stemming from the interplay between stochasticity and sparsity. This said, we are actively developing learning rate schedules that use the understanding of stochastic collapse defined and presented in this work. We share some additional thoughts on this. As detailed at the end of Section 6 (lines 312-328) and Fig. 4, we found that a prolonged initial training phase with larger learning rates are beneficial for subsequent generalization due to a bias toward sparser representations. Inspired by our theory, we empirically counted the number of independent neurons as a measure of sparsity. We conjecture that this is a good indicator for when to drop down the learning rates. The prolonged training with larger learning rates is beneficial as long as it continuously reduces the network sparsity. One idea for the algorithm is to measure the number of independent neurons (can be done sporadically to reduce computational costs) and drop down the learning rates when this number plateaus. One technical challenge here is to define a single variable to quantify the entire network's simplicity rather than layer-by-layer. This challenge, along with benchmarking the algorithm across various datasets, demands substantial effort and extends beyond this paper's scope. Nevertheless, we recognize the significance of such algorithmic design and embrace it as an essential future endeavor, made possible by our theory/empirics!
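As an illustration of the proposed measurement (a hypothetical sketch; `num_independent_neurons` is our naming here, not an implementation from the paper), one could count independent neurons in a layer like this:

```python
import numpy as np

def num_independent_neurons(W_in, W_out, tol=1e-3):
    # Hypothetical sparsity measure: drop "dead" neurons (vanishing
    # incoming and outgoing weights), then merge neuron pairs whose
    # incoming rows and outgoing columns coincide, i.e. pairs lying
    # on a permutation invariant set.
    alive = [i for i in range(W_in.shape[0])
             if np.linalg.norm(W_in[i]) + np.linalg.norm(W_out[:, i]) > tol]
    reps = []
    for i in alive:
        if not any(np.linalg.norm(W_in[i] - W_in[j]) < tol and
                   np.linalg.norm(W_out[:, i] - W_out[:, j]) < tol
                   for j in reps):
            reps.append(i)
    return len(reps)

# Toy layer: neurons 0 and 1 are duplicates, neuron 3 is dead.
W_in = np.array([[1., 0.], [1., 0.], [0., 1.], [0., 0.]])
W_out = np.array([[1., 1., 2., 0.]])
```

The schedule heuristic would then evaluate this count sporadically during training and drop the learning rate once it plateaus.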
Besides, we would like to point out that our work contributes novel and quantitative insights into the effect of stochastic collapse on generalization, explored within a linear teacher-student setting. Our analysis highlights an interesting tradeoff between maintaining enough gradient noise to induce stochastic collapse in directions with weak signal-to-noise ratios, while also reducing this noise to facilitate annealing in directions with robust signal-to-noise ratios. Also, it reveals the surprising finding that SGD can get attracted to **higher** training error saddle points (student with some zero singular values) that nevertheless have **lower** test error than local minima (student with all nonzero singular values) with lower training error.
In summary, we believe our paper provides both novel theoretical insights and a foundation for future practical applications. While we recognize the value and necessity of translating these insights into algorithmic designs, the work presented here primarily serves as a theoretical/scientific experiment work. Your feedback has been instrumental in refining our focus, and we hope this response successfully addresses your queries.
---
Rebuttal Comment 1.1:
Title: Thanks for the response.
Comment: Thanks for the response.
It helps address some of my concerns.
I plan to raise the score at this point.
---
Reply to Comment 1.1.1:
Comment: We are happy to hear that our response resolved some of your concerns and we are glad to know you plan on raising your score. Please let us know if you have any questions. | Summary: This paper studies the implicit bias of SGD, and provides a new perspective by characterizing the invariant set in the parameter space along the SGD dynamics, corresponding to specific model architectures. It is shown that SGD noise induces stochastic attractivity towards such invariant sets, which leads to simpler subnetworks. Such simplicity bias is also shown to be beneficial for generalization. Empirical evidence is also provided.
Strengths: This paper provides a novel perspective for characterizing the implicit bias of SGD. The empirical and theoretical analyses in the current paper seem interesting and insightful.
Weaknesses: 1. The proof of the stochastic attractivity seems to only apply to very simple networks. For sign invariance, Theorem 5.1 applies to a scalar single neuron model; for permutation invariance, only a two-neuron network is analyzed.
2. Similarly, analysis of the training dynamics only applies to two-layer linear networks. These seem insufficient for justifying what is really happening during actual training of deep neural networks.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. The invariant sets for SGD are also invariant sets for GD. Is there evidence that GD doesn't exhibit collapse?
2. Is there any interplay between different types of invariant sets? E.g., which type of invariance does SGD prefer?
3. Is it possible to characterize an invariance type by a corresponding norm of the parameters? If not, why?
4. In general, how to verify the sufficient conditions in Theorem 4.1? Is it possible to verify them empirically for those models used in the experiments in the current paper?
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The authors have adequately addressed the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your insightful review and suggestions, and for the effort you put into them. We address each of the weaknesses and questions raised below:
**Weakness 1**: We value your insight but respectfully disagree. In Section 4 of our paper, Theorem 4.1 yields sufficient conditions for stochastic collapse applicable to **any** affine invariant set of **any** dimension in **any** network. We acknowledge that stronger statements in Sec. 5 & 6 are restricted to simpler settings, partially to provide concrete/intuitive examples to illustrate our general theory. Deriving sufficient and necessary conditions for stochastic collapse for all neural network architectures lies beyond present theory, as we explain in our revised section 4:
“_To extend the collapsing condition derived in Theorem 4.2 to high-dimensional cases, a natural idea would be to consider all one-dimensional slices of parameter space orthogonal to the invariant set. However, the issue is that some of these slices might satisfy the collapsing condition while others do not. This can result in complex dynamics near the boundary of the invariant set making it difficult to derive a necessary and sufficient condition for attractivity in high-dimensions. Nonetheless, we can derive a sufficient condition._”
**Weakness 2**: We would like to clarify the goal of Sec. 6. It is a fundamental challenge to derive analytic learning dynamics for general deep nets. Sec. 6 provides an instructive setting for understanding generalization in deep learning dynamics where we could derive exact results on how stochastic collapse impacts generalization. We used this understanding to derive **new** testable predictions about the role of stochastic collapse in non-linear models. As explained in the paragraph following the introduction of the linear-teacher student analysis (lines 312 - 319):
“_The key prediction is that a large learning rate induces stronger stochastic collapse, thereby regularizing the model complexity. Furthermore, remaining in a phase of larger learning rates for a prolonged period drives SGD closer to the invariant set. Consequently, when the learning rate is eventually dropped, overfitting in these specific directions is mitigated._”
In the subsequent paragraph, we proceeded to “_test this predicted mechanism_” on a “_VGG-16 and ResNet-16 [trained] on CIFAR-10 and CIFAR-100, respectively_”. And as demonstrated in figure 4, “_we found, as predicted, that models with a longer initial training phase collapse towards simpler invariant sets with fewer independent neurons_”.
This combination of deriving a theoretical prediction from analytic insights into a simple model, and then testing this prediction in more complex models, that are currently beyond the reach of theory, is a powerful route forward for advancing our conceptual understanding of the complex processes of deep learning; it combines theory and experiment in creative/synergistic ways, obtaining results that neither theory nor experiment alone could achieve. Therefore this connection between the linear student-teacher setting and more complex models should, we believe, be considered a strength of our work rather than a weakness. We never would have tested a connection between duration of large learning rate training and degree of collapse in Fig. 4 without our theory. We also would never have tested for strong stochastic collapse to permutation invariant sets in Fig. 1 & 2 without our theory.
**Question 1**: While invariant sets of SGD are also invariant sets of GD, such sets can be attractive for SGD but **not** for GD. Our surprising result is that if the diffusion of SGD is strong enough, SGD can be attracted to invariant sets that **repel** GD (examples: local maximum of double well potential in Sec. 4, saddle points with zero singular values in linear-student teacher setting in Sec. 6). Theorem 4.1 & 4.2 shows precisely how diffusion is essential for collapse of SGD to maxima/saddles; GD cannot similarly collapse because diffusion is absent; GD can only collapse to minima.
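This contrast can be simulated directly. The following toy sketch (illustrative, not from the paper) integrates the one-dimensional multiplicative-noise SDE $d\theta = a\theta\,dt + \sigma\theta\,dW$: the origin repels the noise-free flow (the GD analogue), yet attracts the stochastic process once $\sigma^2/2 > a$.

```python
import numpy as np

rng = np.random.default_rng(2)

def simulate(a, sigma, theta0=1.0, dt=1e-3, steps=50_000):
    # Euler-Maruyama for d(theta) = a*theta*dt + sigma*theta*dW.
    # theta = 0 is a repelling fixed point of the drift alone (a > 0),
    # but |theta| -> 0 almost surely whenever sigma**2 / 2 > a.
    theta = theta0
    for _ in range(steps):
        theta += a * theta * dt + sigma * theta * np.sqrt(dt) * rng.standard_normal()
    return theta

gd = simulate(a=0.5, sigma=0.0)   # no diffusion: escapes from 0
sgd = simulate(a=0.5, sigma=2.0)  # strong multiplicative noise: collapses
```

With these parameters the noise-free trajectory grows exponentially while the noisy one collapses toward zero, despite sharing the same drift.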
**Question 2**: Here are factors that we believe influence SGD’s likelihood of collapse:
- Attractivity Rate: SGD is more inclined towards invariant sets with a higher attractivity rate, a key quantity defined in Theorem 4.2.
- Quantity of Invariant Sets: SGD favors more numerous invariant sets; e.g. the number of permutation invariant sets within each layer of $n$ neurons is $O(n^2)$ while the number of sign invariant sets is only $O(n)$. This suggests collapse to permutation invariant sets is more common (see Fig. 1).
**Question 3**: We did characterize invariant types by a norm of the parameters:
- Sign Invariant Set: For a given neuron, its sign invariant set is defined by the vanishing norm of both its incoming and outgoing weight vectors (lines 235 - 236).
- Permutation Invariant Set: For two neurons, their permutation invariant set is determined by the vanishing norm of the difference between their incoming and outgoing weight vectors (lines 227 - 229 and lines 236 - 239).
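These two characterizations can be written as explicit distance-like quantities (a hypothetical sketch; the function names are ours, not from the paper):

```python
import numpy as np

def dist_to_sign_invariant(w_in, w_out):
    # Vanishes exactly when the neuron's incoming and outgoing
    # weight vectors are both zero -- its sign invariant set.
    return float(np.sqrt(np.linalg.norm(w_in)**2 + np.linalg.norm(w_out)**2))

def dist_to_permutation_invariant(w_in_i, w_out_i, w_in_j, w_out_j):
    # Vanishes exactly when the two neurons share identical incoming
    # and outgoing weight vectors -- their permutation invariant set.
    return float(np.sqrt(np.linalg.norm(w_in_i - w_in_j)**2 +
                         np.linalg.norm(w_out_i - w_out_j)**2))
```

Tracking these norms during training gives a direct readout of how close SGD is to each invariant set.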
**Question 4**: Verifying the sufficient condition of theorem 4.1 in deep nets is technically hard because: (1) we must check equation (7) holds for any unit normal vector and every point $\theta$ in the invariant set; (2) we must compute the full Hessian matrix and second derivative of the diffusion matrix, within the entire invariant set, which is expensive. Although we cannot directly verify theorem 4.1 for modern neural nets, we have verified it in simpler examples. We now include additional figures (see Figure S1 and S2 in the attached PDF) for the verification of Theorem 4.2 and 5.1, which thus also verifies Theorem 4.1. But **most importantly** to circumvent these difficulties, we **directly** demonstrate stochastic collapse in realistic settings of ResNets and VGG (Fig. 1,2,4); this **direct** empirics constitutes a powerful demonstration of the prevalence of stochastic collapse in practice.
---
Rebuttal Comment 1.1:
Title: Response to the authors
Comment: I thank the authors for the detailed response. I don't have further questions, and I'm willing to raise the score.
---
Reply to Comment 1.1.1:
Comment: Thank you for your consideration in revising the score! Please let us know if any additional questions come up. | Summary: The paper explores how stochastic gradient descent (SGD), a common optimization method for deep neural networks, can lead to simpler and more generalizable models. The paper introduces the concept of invariant sets, which are regions of the parameter space that are unaffected by SGD, and shows how SGD can be attracted to these sets under certain conditions. The paper also analyzes how this attraction can result in removing or reducing redundant neurons in the network, and how this can improve the performance on unseen data. The paper provides theoretical and empirical evidence for these phenomena, and explains why training with large learning rates for a long time can help SGD find simpler subnetworks.
Strengths: - The paper is well-motivated, focusing on the study of how gradient noise directs SGD dynamics toward simpler subnetworks.
- Several interesting insights are presented, including the relationship between learning rate and stochastic collapse, and the influence of the stochastic collapse on generalization.
Weaknesses: - The paper does not sufficiently clarify the applicability of the attractivity condition to permutation invariant sets in high-dimensional settings. Additionally, it inadequately addresses the extension of the linear teacher-student framework to non-linear models or more general scenarios.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: - Is it possible to (theoretically) analyze stochastic collapse beyond linear models or one-dimensional settings?
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: No potential negative societal impact.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate your comprehensive review and the time you have taken to suggest improvements for our study. We'd like to respond to the weaknesses and queries you raised regarding our paper:
**Weakness 1**: “_not sufficiently clarifi[ing] the applicability of the attractivity condition to permutation invariant sets in high-dimensional settings._” We did not provide a direct theoretical treatment of the theorems in section 4 for the high-dimensional permutation invariant set (though we did do this for the sign invariant set in the teacher-student analysis in section 6). As we explained on lines 221-223, “_we provide intuitive insights into the attractivity of permutation invariant sets through a toy example of a two-neuron neural network in Appendix D_”. We attempted to analyze this setting theoretically with high-dimensional input/output, but found it very difficult to derive statements stronger than what the main theorem statements in section 4 already implied. Note that our main Theorem 4.1 provides a sufficient condition for stochastic attractivity to **any** affine invariant set, **including** permutation invariant sets. To further bolster this result, we devoted the majority of section 5 in our paper to empirically verifying and exploring the applicability of our attractivity condition to permutation invariant sets in high-dimensional neural network settings. We found, remarkably, strong stochastic collapse to permutation invariant sets in which many neurons have identical incoming and outgoing weights (Fig. 1) in realistic networks (ResNets and VGG-16). We believe our combination of theory and empirics provides strong applicability of the phenomenon of attraction to permutation invariant sets.
**Weakness 2**: “_extension of the linear teacher-student framework to non-linear models or more general scenarios._” We appreciate your comment and would like to clarify how we extended the linear teacher-student framework to the more realistic neural network settings. Directly extending our theoretical analysis to more complex non-linear models is exceptionally challenging, demanding the introduction of novel analysis techniques, and would stand as a distinct and independent research endeavor. Thus, the goal of section 6 was to use a well-studied setting for understanding generalization in deep learning dynamics where we could derive exact results highlighting the influence of stochastic collapse on generalization. We used this understanding to derive testable predictions about the role of stochastic collapse in non-linear models. As explained in the paragraph following the introduction of the linear-teacher student analysis (lines 312 - 319):
“_The analyses in the linear teacher student network provide valuable insights into how stochastic collapsing can enhance generalization…The key prediction is that a large learning rate induces stronger stochastic collapse, thereby regularizing the model complexity. Furthermore, remaining in a phase of larger learning rates for a prolonged period drives SGD closer to the invariant set. Consequently, when the learning rate is eventually dropped, overfitting in these specific directions is mitigated._”
In the subsequent paragraph, we proceeded to “_test this predicted mechanism_” on a “_VGG-16 and ResNet-16 [trained] on CIFAR-10 and CIFAR-100, respectively_”. And as demonstrated in figure 4, “_we found, as predicted, that models with a longer initial training phase collapse towards simpler invariant sets with fewer independent neurons_”.
This combination of deriving a theoretical prediction from analytic insights into a simple model, and then testing this prediction in more complex models, that are currently beyond the reach of theory, is a powerful route forward for advancing our conceptual understanding of the complex processes of deep learning; it combines theory and experiment in creative, powerful and synergistic ways, obtaining results that neither theory nor experiment alone could achieve. Therefore this connection between the linear student-teacher setting and more complex models should, we believe, be considered a strength of our work rather than a weakness.
Answer to the question: Yes, our work **does** indeed analyze stochastic collapse in non-linear or high-dimensional setups. Specifically, Section 4 aims to offer a general theorem for stochastic collapse within a broader framework. Following Section 4, we study stochastic collapse in a **non-linear** but single-neuron model (Theorem 5.1), while our teacher-student framework in Section 6 studies stochastic collapse in a **high-dimensional** linear model. We agree that finding a non-linear **and** high-dimensional setting where studying stochastic collapse is theoretically tractable is an important and challenging step for future work. Nonetheless, our current work fills in this theoretical gap by exploring these settings through extensive empirics. We verified empirically the prevalence and importance for generalization of stochastic collapse in realistic networks (ResNets and VGG) (Fig. 1,2, and 4), and demonstrate that stochastic collapse in these realistic settings shows similar properties to those we have found in the simple examples.
---
Rebuttal Comment 1.1:
Comment: Thanks for your helpful responses. The reviewer has no further questions and will keep the positive rating of the paper.
---
Reply to Comment 1.1.1:
Comment: Thank you for your reply. We are happy to hear you think positively about the paper. Is there an aspect of our work that is holding you back from raising your score? Your comments will help us improve our paper further. Please let us know if any new questions come up.
Rebuttal: We would like to thank all reviewers for their careful and detailed comments. We here attach a single-page pdf with the following three figures to address reviewers’ questions.
1. Figure S1- Empirics for our illustrative example. This shows that the quantitative prediction via Theorem 4.2 agrees with empirical results.
2. Figure S2 - Empirics for the single neuron model in Section 5. This shows that the quantitative prediction via Theorem 5.1 agrees with empirical results.
3. Figure S3 - Empirics for the teacher-student setup in Section 6, where we verify that the key observed phenomena shown in Figure 3 hold even without the assumptions proposed in Section 6 and on SGD minibatch noises.
Please also see our individual responses to each reviewer for more information!
Lastly, it's worth pointing out that a crucial conceptual advance within our paper, though present, hasn't been prominently emphasized. This insightful perspective is as follows: **Anisotropic position dependent diffusion can cause SGD in deep learning to get attracted to higher training error saddle points that nevertheless have lower test error than local minima with lower training error.** This statement is very surprising to many people we have discussed with, yet it is nevertheless true and our paper explains why. For example, in our linear student-teacher setting, fixed points where the student has some of its singular values stochastically collapse to zero (corresponding to small data singular values that are not learned), are actually saddle points in the training error loss landscape, and these saddle points have **higher** training error than the global minimum (in which all the data singular values are learned for a full rank student), but **lower** test error than the global minimum (also, more generally, any network at a learning fixed point with identical neurons typically is at a saddle; lower training error can be achieved by specializing the neurons). This is thus a concrete example of how convergence to a higher training error saddle point can, remarkably, help generalization. We believe this may become an impactful, well-known conceptual advance in deep learning due to our paper. We will expand upon this important loss landscape perspective in the revised camera-ready paper.
We greatly appreciate the time and effort you spent reviewing our paper, which we are certain will make our work better. We hope you will consider our paper an important contribution to the NeurIPS community. We look forward to hearing from you!
Pdf: /pdf/0b82e47692963fbdb6841dd2255edb2de85f7ce1.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: The authors investigate the implicit bias of the SGD algorithm towards invariant sets -- with a focus on sign and permutation invariance. In particular, they derive a sufficient condition for the stochastic attraction of the SDE describing the first two moments of the SGD process towards these invariant sets, a phenomenon that they call "stochastic collapse". Then, they turn to the study of generalization error dynamics in a one-hidden layer teacher-student linear network, showing that large learning rates increase stochastic collapse, that in turn benefits generalization by implicit low-rank regularization. Finally, they test their theoretical predictions on realistic architectures.
Strengths: The paper presents an interesting perspective on the implicit bias of SGD and its beneficial effect on generalization, by bridging the gap between different approaches -- e.g., the interplay of the Hessian and the effective noise, the tuning of the learning rate, the role of sparsity. The results are solid and well presented.
Weaknesses: - The results are obtained only in continuous time
- Assumptions (A1-A4) in Sec. 6 are really restrictive
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: - On line 176, the authors comment that stochastic collapse can bias the SGD dynamics towards local maxima of the loss. How can this result in better generalization? A comment in relation to Sec. 6 would be useful.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 3 good
Limitations: Limitations have been adequately addressed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your insightful review and the time you took to identify areas where our work could be improved. We'd like to address the concerns you raised:
**Weakness A**: We appreciate your observation regarding our use of a continuous formulation of SGD and we agree that this is a limitation of our current analysis. As we discussed at the end of our manuscript, “_extending our analytic results from continuous SGF to the discrete SGD updates_” is an important direction for future work. Our decision to adopt a continuous formulation was driven by our desire to leverage existing concepts from stochastic control theory, such as the definition of stochastic attractivity and the use of tools such as Itô's calculus to derive concise conditions for stochastic attractivity. However, we'd like to underline that our paper's main arguments retain their strength for several reasons:
- Our discussion of the invariance properties of sign-invariant and permutation-invariant sets (refer to Lemma 4.1 in our paper) is not confined to the continuous formulation; these properties remain valid in the original discrete SGD setting as well.
- Our empirical examinations with CNN models utilize the conventional discrete SGD algorithm instead of a discretization of the continuous SGF variant discussed in our theoretical framework. The consistency of our empirical findings with our theoretical postulates underscores that our continuous assumption is sufficient to explain important features of the outcome of discrete SGD updates at the finite learning rates typically used in practice.
In our updated manuscript, we will add a discussion further elaborating on our choice of a continuous formulation of SGD and on approaches for extending our findings to the discrete setting.
**Weakness B**: We want to emphasize that assumptions A1, A3, and A4 are standard assumptions used in the canonical papers [Saxe et al. 2014] and [Lampinen & Ganguli 2018], which established the linear teacher-student model as a setting for studying generalization. We added A2 such that we could derive analytical expressions for the theory when incorporating the new ingredient of stochasticity into the analysis. We agree that it is important to check how restrictive these assumptions are and we thank the reviewer for pointing this out. We now include additional empirical results (see Figure S3 in the attached PDF) which verify that the key observed phenomena hold even without these assumptions. Although we acknowledge that our assumptions in Section 6 are restrictive, we believe they are necessary to provide an analytic demonstration of how stochastic collapse influences generalization, though the derived conclusions hold more generally. Indeed, remarkably, our theory, despite its restrictions, made a **new** prediction that training for longer at larger learning rates helps generalization because it promotes stochastic collapse. We tested this novel prediction successfully in regimes far beyond the restrictions of our theory - in ResNets and VGGs trained on CIFAR10/100 (see Fig. 4). Thus our simple theory makes powerful, generally applicable predictions.
**Answer to question for line 176**: We appreciate your inquiry. Upon revisiting this sentence, we agree that it is somewhat ambiguous and might give the impression that converging to a local maximum is beneficial. Although there may exist certain pathological instances where such convergence is beneficial, in general this would not be the case. For a high-dimensional setting, such as that considered in Section 6, stochastic collapse to saddle points, not local maxima, would be beneficial to generalization. The sentence should read, "_converge to a local maximum or **saddle-point** of the loss landscape_". For example, in our linear student-teacher setting, fixed points where the student has some of its singular values stochastically collapse to zero (corresponding to small data singular values that are not learned), are actually saddle points in the training error loss landscape, and these saddle points have **higher** training error than the global minimum (in which all the data singular values are learned for a full rank student), but **lower** test error than the global minimum. This is thus a concrete example of how convergence to a higher training error saddle point can, remarkably, help generalization. We will expand upon this important loss landscape perspective in the revised camera-ready paper.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for their careful responses. My concerns have been adequately addressed and I remain of the opinion that this work is interesting and worth publication. Therefore, I will keep my score of 7.
---
Reply to Comment 1.1.1:
Comment: Thank you for your continued support. Let us know if additional questions come up. | null | null | null | null | null | null |
Efficient Bayesian Computational Imaging with a Surrogate Score-Based Prior | Reject | Summary: The paper deals with variational inference (VI) of the posterior distribution when using a diffusion-based generative model as the prior distribution. The paper proposes using a lower bound on the log probability of the prior distribution (instead of calculating it through the ODE in standard diffusion-based generative models as in [1]) for the optimization of the Kullback-Leibler (KL) divergence within the VI framework. This results in a more efficient training procedure for the VI than using the true log probability, as well as a smaller memory footprint.
The authors then validate their method using an MRI dataset as well as the standard CelebA and CIFAR datasets.
[1] Berthy T. Feng, Jamie Smith, Michael Rubinstein, Huiwen Chang, Katherine L. Bouman, and William T. Freeman. Score-based diffusion models as principled priors for inverse imaging. arXiv preprint arXiv:2304.11751, 2023.
Strengths: * The paper proposes a solution to estimating the KL that is indeed way faster than the previous known methods and this allows for **effective gains in training time and memory footprints**. They provide some visual evidence that those gains do not alter considerably the quality of the variational approximation of the posterior distribution.
* The presentation of the problem is good and they motivate the need for an efficient algorithm with a real-world application (accelerated MRI).
* The explanation of the method is clear and reproducible. I expect practitioners would be able to clearly implement the algorithm with the description given in the main part of the text.
Weaknesses: * To me, the paper gives a slightly overstated presentation of the variational inference as sampling from the true Bayesian posterior. The Bayesian posterior is clearly defined once one states that the prior is the diffusion-based generative model. Therefore, when doing variational inference, we are only approximating (to an unknown degree) this posterior distribution, so the samples generated by the outcome distribution of the VI procedure are merely an approximation to the Bayesian solution. Of course, VI is still a useful approach, and I agree that the proposed algorithm is closer to "true Bayesian inference" (line 42) than others such as DPS [2] or SMC-DIFF [3].
* The numerical evaluation and comparison with other algorithms is insufficient.
1) **Evaluating the distance to the true posterior**: The numerical evaluation compares the posterior only visually with the posterior obtained from [1]. When comparing with [2], the paper compares SSIM and PSNR as well as visual assessments of the reconstructions. As stated in line 258 of the paper, for ill-posed inverse problems, comparison to the "true image" cannot be considered an adequate metric. I'd suggest considering an example where the posterior is analytically available (for example, when considering a Gaussian likelihood with a diffusion model over a mixture of Gaussians). In such a case, several metrics can be used to compare the different methods ([1], [2]), such as the sliced Wasserstein distance or even the KL.
2) **Complexity**: When comparing with other methods such as [2], we should keep in mind that the proposed method requires solving an optimization problem for each measurement $y$ (minimization of the KL). This is not the case for some of the "posterior diffusion samplers" such as [2] and [3]. Therefore, the actual computing time needed once a measurement $y$ is received is considerably smaller for [2] and [3].
I'd be inclined to raise my grade if the quantification of the error to the true posterior is better understood, especially in comparison with [1] and [2].
[2] Hyungjin Chung, Jeongsol Kim, Michael Thompson Mccann, Marc Louis Klasky, & Jong Chul Ye (2023). Diffusion Posterior Sampling for General Noisy Inverse Problems. In The Eleventh International Conference on Learning Representations .
[3] Brian L. Trippe, Jason Yim, Doug Tischer, David Baker, Tamara Broderick, Regina Barzilay, & Tommi S. Jaakkola (2023). Diffusion Probabilistic Modeling of Protein Backbones in 3D for the motif-scaffolding problem. In The Eleventh International Conference on Learning Representations .
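The analytic sanity check suggested in point 1 can be sketched as follows (a hypothetical 1D toy; the setup and function names are ours, not from the paper): under a Gaussian likelihood $y = x + n$ with $n \sim \mathcal{N}(0, \sigma^2)$ and a mixture-of-Gaussians prior, the posterior is again a mixture of Gaussians, with each component updated conjugately and its weight rescaled by the component's evidence.

```python
import math

def gauss_pdf(x, mu, var):
    return math.exp(-(x - mu) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

def mog_posterior(y, weights, mus, taus2, sigma2):
    """Analytic posterior for y = x + n, n ~ N(0, sigma2), under the
    prior sum_k w_k N(mu_k, tau2_k). Returns updated (weights, means, vars)."""
    new_w, new_m, new_v = [], [], []
    for w, mu, t2 in zip(weights, mus, taus2):
        v = 1.0 / (1.0 / t2 + 1.0 / sigma2)     # conjugate variance update
        m = v * (mu / t2 + y / sigma2)          # conjugate mean update
        ev = w * gauss_pdf(y, mu, t2 + sigma2)  # evidence of component k
        new_w.append(ev); new_m.append(m); new_v.append(v)
    z = sum(new_w)
    return [w / z for w in new_w], new_m, new_v

# Illustrative call: a two-component prior, measurement y = 1.
w, m, v = mog_posterior(y=1.0, weights=[0.5, 0.5], mus=[-2.0, 2.0],
                        taus2=[1.0, 1.0], sigma2=1.0)
# The component near y = 1 dominates the posterior weight.
```

Since every quantity is in closed form, any sampler's output can be compared against this ground truth with sliced Wasserstein or KL metrics, as the review suggests.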
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: 1. I do not understand figure 3(a). How can the prior be Gaussian and at the same time come from a diffusion generative model on CelebA? I understand that somehow the score was trained to sample from a Gaussian (if the target distribution is a Gaussian the score is analytically available), but it's not clear how. If the posterior is Gaussian, why not using a Gaussian family for VI instead of RealNVP? Also, if everything is Gaussian, one could in principle calculate several metrics between each algorithm and the true distribution, which would be a non subjective metric for the posterior.
2. Figure 4 suggests that the minima found by the proposed algorithm and by [1] are quite different. What is the KL gap between the two?
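One generic way to quantify such a gap, assuming both variational posteriors expose tractable log-probabilities (as RealNVP does), is a Monte Carlo estimate of the KL divergence. Here is a sketch on 1D Gaussians, where the analytic value $\text{KL} = 0.5$ is known:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two toy "posteriors" with tractable log-probs: N(0, 1) and N(1, 1).
# The shared normalizing constant is dropped; it cancels in the KL
# only because the two variances are equal.
def logp(x):
    return -0.5 * x ** 2          # log N(x; 0, 1) + const
def logq(x):
    return -0.5 * (x - 1) ** 2    # log N(x; 1, 1) + const

# KL(p || q) = E_{x~p}[log p(x) - log q(x)]; analytic value here is 0.5.
x = rng.standard_normal(100_000)
kl_mc = np.mean(logp(x) - logq(x))
```

The same estimator, applied with each flow's exact `log_prob`, gives the forward and reverse KL between two fitted variational posteriors; as with any importance-free Monte Carlo estimate, heavy-tailed log-ratios can make it high-variance.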
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 2 fair
Contribution: 2 fair
Limitations: NA
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your thoughtful feedback. Please refer to the global rebuttal for discussions on computational cost and closeness to the true posterior. Below we address specific questions.
***
Q: Evaluating the distance to the posterior.
A: Thank you for your suggestion to quantitatively assess our method with a mixture-of-Gaussians prior. We will add such an experiment to the main or supplementary material.
***
Q: Figure 3(a). How can the prior be Gaussian and at the same time come from a diffusion generative model on CelebA?
A: In that experiment, the diffusion model was trained on a Gaussian approximation of CelebA. Specifically, we computed the empirical mean and covariance of CelebA training images and trained the diffusion model on samples from a Gaussian distribution with the same mean and covariance. This allows us to analytically derive the Gaussian posterior from the known Gaussian likelihood and Gaussian prior. We will clarify that experiment in the text.
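For reference, the analytic Gaussian posterior in such a setup follows from a standard conjugate update (a generic sketch under an assumed linear measurement model $y = A\mathbf{x} + n$; this is our illustration, not the paper's exact code):

```python
import numpy as np

def gaussian_posterior(y, A, mu_prior, Sigma_prior, sigma_noise2):
    """Posterior of x for y = A x + n, n ~ N(0, sigma_noise2 * I),
    under the prior x ~ N(mu_prior, Sigma_prior)."""
    P = np.linalg.inv(Sigma_prior)
    Sigma_post = np.linalg.inv(P + A.T @ A / sigma_noise2)
    mu_post = Sigma_post @ (P @ mu_prior + A.T @ y / sigma_noise2)
    return mu_post, Sigma_post

# Tiny 2D illustration (denoising, i.e. A = I, unit prior and noise):
mu_post, Sigma_post = gaussian_posterior(
    y=np.array([1.0, 0.0]), A=np.eye(2),
    mu_prior=np.zeros(2), Sigma_prior=np.eye(2), sigma_noise2=1.0)
# mu_post = [0.5, 0.0], Sigma_post = 0.5 * I
```

With `mu_prior` and `Sigma_prior` set to the empirical CelebA mean and covariance, this yields the closed-form posterior that the learned variational posterior can be checked against.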
***
Q: Figure 4. What is the KL gap between the two?
A: Let $q_{\phi_1}$ and $q_{\phi_2}$ be the variational posteriors optimized with the surrogate score-based prior and exact score-based prior, respectively, in Fig. 4. $\phi_1$ and $\phi_2$ are each the parameters of a RealNVP. The reverse KL divergence is $$\text{KL}(q_{\phi_2}\lVert q_{\phi_1})=E_{\mathbf{x}\sim q_{\phi_2}}\left[\log q_{\phi_2}(\mathbf{x})-\log q_{\phi_1}(\mathbf{x})\right]\approx 3011.2$$ (approximated using 10240 samples from the exact variational posterior). The forward KL divergence is $$\text{KL}(q_{\phi_1}\lVert q_{\phi_2})=E_{\mathbf{x}\sim q_{\phi_1}}\left[\log q_{\phi_1}(\mathbf{x})-\log q_{\phi_2}(\mathbf{x})\right]\approx 277739.5$$ (approximated using 10240 samples from the surrogate variational posterior with two outliers removed). Please note that while RealNVP normalizing flows provide tractable log-probabilities, their accuracy on out-of-distribution images is not well-proven and may affect the accuracy of KL estimates. | Summary: This paper focuses on solving inverse problems using diffusion based probabilistic models. The approach considered consists in minimizing the KL divergence between a variational posterior and the true posterior of the diffusion model. Computing this KL involves approximating the log probability of the diffusion model's marginal (which is assumed to approximate the true data generating distribution). A previous paper [1] used the very expensive ODE approach to approximate this log probability, making the whole method prohibitively expensive and inefficient when factoring in the computational cost. The present paper suggests instead minimizing an upper bound on the KL divergence, using a lower bound on the log probability of interest that was derived in [2].
[1] *Feng, B.T., Smith, J., Rubinstein, M., Chang, H., Bouman, K.L. and Freeman, W.T., 2023. Score-Based diffusion models as principled priors for inverse imaging. arXiv preprint arXiv:2304.11751.*
[2] *Song, Yang, Conor Durkan, Iain Murray, and Stefano Ermon. "Maximum likelihood training of score-based diffusion models." Advances in Neural Information Processing Systems 34 (2021): 1415-1428.*
Strengths: The proposed method is more efficient than that proposed in [1]. It results in a drastic reduction of computational time and memory cost while maintaining very similar performance.
Weaknesses: - I believe that the KL approach is sound and reliable since the resulting variational approximation will likely not sample outside the support of the posterior. It is, however, extremely costly in terms of computational time and memory. It takes 9 hours to obtain a variational approximation over a **single image** of dimension 2^16. On the other hand, classical posterior sampling methods such as DPS [3] take less than one minute for larger images. One might then argue that such methods do not target the exact posterior, and that is true! However, the approach in the present paper is also not guaranteed to sample the correct posterior; the forward KL suffers from mode collapse and posteriors over high-dimensional images are highly multimodal. The only advantage of this approach is that it will likely not "hallucinate", but I do not think that it is in any way competitive with other existing methods when one factors in the computational cost.
- The authors should have at least illustrated their method on simple toy examples where the posterior is available and multimodal, so that we can see if it indeed recovers the posterior completely.
[3] Chung, Hyungjin, Jeongsol Kim, Michael T. Mccann, Marc L. Klasky, and Jong Chul Ye. "Diffusion posterior sampling for general noisy inverse problems." ICLR (2023)
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: - Figure 2, page 7. How come the present method scales better with image size? Shouldn't it be the opposite?
- The main reason for the huge computational cost is the fact that this method requires differentiating the score network with respect to its input (by the chain rule). This is because the authors seek an approximation of the log probability. However, it seems to me that what is actually needed is $\nabla_x \log p_{\theta}(x)$ (again by the chain rule). While this score is not available in practice, can't it be approximated by taking, for example, the score at a time $t$ close to $0$?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 3 good
Contribution: 2 fair
Limitations: see above.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your feedback. Please refer to the global rebuttal for discussions on computational cost and closeness to the true posterior. Below we address specific questions.
***
Q: Illustrate on toy examples where the posterior is available and multimodal.
A: Thank you for your suggestion. We will add an experiment with a mixture-of-Gaussians posterior (in addition to the Gaussian posterior shown in Fig. 3(a)) to the main or supplementary material.
***
Q: Figure 2, page 7. How come the present method scales better with image size?
A: The plots show optimization time and memory (lower is better). As the plots show, computational cost increases with image size. We do find that the efficiency gap between our method and [Feng 2023] widens as the image size increases.
***
Q: While this score is not available in practice, can't it be approximated by taking for example the score at a time t close to 0?
A: Fig. 5 in [Feng 2023] addresses exactly this question. Approximating $\nabla_\mathbf{x}\log p_\theta(\mathbf{x})$ with the score-model output at time $t$ close to 0 leads to an incorrect posterior. One hypothesis for this behavior is that the score-model neural network does not generalize to out-of-distribution images and is only accurate for images with noise levels similar to the ones it saw during training.
***
References:
[Feng 2023] B.T. Feng, J. Smith, M. Rubinstein, H. Chang, K.L. Bouman, and W.T. Freeman. Score-based diffusion models as principled priors for inverse imaging. ICCV, 2023. | Summary: Authors propose a non-amortized variational inference approach to solve large-scale Bayesian inference problems where the prior is based on an cheap-to-evaluate approximation to a pretrained diffusion model.
Strengths:
**Originality.** This paper hits the nail on the head when it comes to large-scale Bayesian inference in the context of inverse problems, especially when diffusion models are used as priors.
**Quality and clarity.** The paper is well-written and easy to follow. The authors have done a great job in explaining the proposed approach and the experiments are well-designed to demonstrate the effectiveness of the approach.
**Significance.** This approach allows to leverage the power of diffusion models in approximating complex distributions in solving large-scale Bayesian inference problems. This is a significant contribution to the field and I am interested to apply this approach to another inverse problem domain.
Weaknesses: * My primary concern revolves around the justification for selecting diffusion models as prior distributions over alternative generative models. It would greatly enhance the paper to thoroughly examine the advantages and disadvantages of diffusion models compared to other generative models, specifically within the framework of large-scale Bayesian inference. It would be ideal to include a comparison with amortized normalizing flows and/or injective flows. This raises the question: why not initially employ a normalizing flow to learn the prior or the full posterior distribution (amortized VI)?
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: * I would like to know the authors' stance on amortized vs. non-amortized variational inference. What would be required to amortize their objective function, enabling the use of $q_{\phi}$ to approximate the posterior distribution for any new observation after training? This approach has the potential to make training costs offline, enabling fast posterior sampling during test time.
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 3 good
Limitations: * It would be beneficial to provide additional comments on the limitations that arise when dealing with "out-of-distribution" data (the unknown being outside the diffusion-model distribution).
Flag For Ethics Review: ['No ethics review needed.']
Rating: 9: Very Strong Accept: Technically flawless paper with groundbreaking impact on at least one area of AI/ML and excellent impact on multiple areas of AI/ML, with flawless evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your encouraging and thoughtful feedback. Below we address specific questions.
***
Q: Diffusion models vs. other generative models as priors for large-scale Bayesian inference?
A: We initially investigated discrete normalizing flows (NFs) and found they performed poorly as image priors. To quantify the discrepancy between an NF prior and a score-based prior, we conducted the same experiment in Fig. 3(b)(i) with an NF prior. The task was to denoise a CelebA test image (noise std. dev. = 0.2 = 20% of the image dynamic range). For the NF prior, we trained a RealNVP on the same CelebA training data that the score-based prior had been trained on. We then followed our variational-inference approach to approximate the posterior with a (separate) RealNVP. Below are the average (+/- std. dev.) PSNR and SSIM across 10240 estimated posterior samples under each prior.
*Score-based diffusion model (exact log-probability as proposed in [Feng 2023]).*
PSNR: 24.31 +/- 0.2
SSIM: 0.88 +/- 0.01
*Score-based diffusion model (our proposed surrogate).*
PSNR: 22.07 +/- 0.3
SSIM: 0.87 +/- 0.01
*Normalizing flow.*
PSNR: 11.66 +/- 0.2
SSIM: 0.32 +/- 0.02
These results suggest that a diffusion model is a more-effective image prior than a normalizing flow, even when using the efficient surrogate that we propose.
***
Q: Non-amortized vs. amortized variational inference?
A: Thank you for your insightful suggestion to investigate amortized variational inference. As you noted, amortized variational inference could adapt the estimated posterior to new measurements more efficiently. This is a promising direction for future work, especially when the imaging task involves many measurements with similar structure (such as in video reconstruction).
***
Q: Out-of-distribution measurements?
A: Robustness to mismatched priors is an advantage of score-based priors. In [Feng 2023], Figs. 8 and 9 demonstrate that exact score-based priors are more robust to out-of-distribution measurements than baseline methods (including [Chung&Kim 2023]). Since baselines sample with the trained diffusion model, they are heavily biased to sample from the prior. When the measurement weight is low, they hallucinate features from the prior; when the measurement weight is high, they introduce unrealistic artifacts. In contrast, score-based priors automatically find a stable posterior even when the source image is far from the prior.
We have empirically found that surrogate score-based priors offer similar robustness to out-of-distribution measurements. We would be happy to add an experiment demonstrating this in our revised manuscript or supplementary material.
***
References:
[Feng 2023] B.T. Feng, J. Smith, M. Rubinstein, H. Chang, K.L. Bouman, and W.T. Freeman. Score-based diffusion models as principled priors for inverse imaging. ICCV, 2023.
[Chung&Kim 2023] H. Chung, J. Kim, M.T. Mccann, M.L. Klasky, and JC Ye. Diffusion Posterior Sampling for General Noisy Inverse Problems. ICLR, 2023.
[EHT 2019] The EHT Collaboration et al. First M87 Event Horizon Telescope Results. IV. Imaging the Central Supermassive Black Hole. ApJL, 2019.
---
Rebuttal Comment 1.1:
Comment: I appreciate the authors for their response.
While going over the provided comments from other reviews, I have noticed a recurring concern regarding the computational cost linked with the non-amortized variational inference (VI) scheme. I can empathize with these concerns and believe that applying this approach to high-dimensional problems with forward operators that are expensive to evaluate could potentially lead to challenges in terms of scalability.
However, I also acknowledge the authors' valid point about achieving reliable posterior approximations in scientific computing applications by explicitly integrating the forward operator into the inference scheme. Considering that existing preconditioned non-amortized variational inference techniques [1, 2]—readily applicable to the proposed method—offer a significant reduction in the computational costs associated with non-amortized variational inference while still providing dependable posterior estimates, I am less apprehensive about the incurred costs and would like to maintain the same score.
I hope that other reviewers recognize that this technique has been devised while keeping in mind the intricacies of scientific computing applications, and furthermore, there are methods that enable scalable non-amortized variational inference.
[1] A. Siahkoohi, G. Rizzuti, M. Louboutin, P. Witte, and F. J. Herrmann, “Preconditioned training of normalizing flows for variational inference in inverse problems,” in 3rd Symposium on Advances in Approximate Bayesian Inference, Jan. 2021.
[2] A. Siahkoohi, G. Rizzuti, R. Orozco, and F. J. Herrmann, “Reliable amortized variational inference with physics-based latent distribution correction,” Geophysics, vol. 88, no. 3, R297–R322, Jan. 2023. | null | null | Rebuttal 1:
Rebuttal: We thank all the reviewers for their encouraging feedback and for recognizing our “significant contribution to the field” (huK6): enabling large-scale Bayesian inference with a diffusion-model prior.
**Key contribution.** Our aim is to make score-based priors computationally feasible for inference of large images. As noted by all the reviewers, we achieve this goal with “effective gains in training time and memory footprints” (sVNv) and “a drastic reduction of computational time and memory cost while maintaining … very similar performance” (iehD) to the approach in [Feng 2023].
The approach of turning a diffusion model into a standalone prior brings benefits that are not offered by the diffusion-model-based sampling methods referenced by the reviewers ([Chung&Kim 2023] and [Trippe&Yim 2023]). These include:
* **Hyperparameter-free inference.** Unlike [Chung&Kim 2023] and similar methods, there are no measurement weights since we strictly follow the log-posterior formula.
* **Flexibility.** Users have freedom in choosing the optimization algorithm. In other methods like [Chung&Kim 2023] and [Trippe&Yim 2023], the sampling method is fixed. Although we demonstrate a variational-inference approach, our proposed surrogate score-based prior can be plugged into any optimization objective function that requires a differentiable log-probability.
[Feng 2023] has recently been accepted to ICCV based on these merits despite its computational limitations. Our work makes it possible to realize these same advantages with far-greater computational efficiency. For example, inference of an image with 64x64 pixels took over 5 days with the exact approach ([Feng 2023]) but just 45 minutes with ours (Fig. 2 in the manuscript).
**Key application areas.** Our approach is particularly useful for scientific and medical applications, where it is critical to understand the uncertainty in recovered features. While iehD noted that our method is less likely to hallucinate than [Chung&Kim 2023], [Jalal&Arvinte 2021], and [Song 2022], we disagree with their assumption that this is not worth the computational cost -- strong hallucinations unsupported by the measurements should not be tolerated in scientific analysis.
As Reviewers iehD and sVNv observe, the variational-inference approach we take requires fitting a distinct posterior for each set of measurements. However, it is common in scientific applications to be concerned with a single set of measurements and spend significant time on recovering the best-possible image from those measurements. One example is black-hole imaging, where the difficulty of obtaining measurements warrants a careful approach to image reconstruction.
In fact, since the submission, we have applied our method to black-hole imaging using the Event Horizon Telescope data published in [EHT 2019]. The efficiency of the proposed method made it possible to quickly iterate on our findings with this data. Our recent experience confirms the practical usefulness of the surrogate score-based prior, and we note that it is always possible to refine results with the slower, yet more exact, approach of [Feng 2023].
**Exactness of the posterior (iehD, sVNv).** Although our motivation to perform Bayesian inference leads us to minimize the upper-bound of a variational loss, we do not claim to sample from the exact posterior and will add language to the text to clarify this. Note that exact posterior sampling is rare, especially for high-dimensional images. Variational inference is itself an approximation of exact Bayesian inference, yet this technique has proven influential in many fields.
References:
[Feng 2023] B.T. Feng, J. Smith, M. Rubinstein, H. Chang, K.L. Bouman, and W.T. Freeman. Score-based diffusion models as principled priors for inverse imaging. ICCV, 2023.
[Chung&Kim 2023] H. Chung, J. Kim, M.T. Mccann, M.L. Klasky, and JC Ye. Diffusion Posterior Sampling for General Noisy Inverse Problems. ICLR, 2023.
[Trippe&Yim 2023] B.L. Trippe, J. Yim, D. Tischer, D. Baker, T. Broderick, R. Barzilay, and T.S. Jaakkola. Diffusion Probabilistic Modeling of Protein Backbones in 3D for the motif-scaffolding problem. ICLR, 2023.
[Jalal&Arvinte 2021] A. Jalal, M. Arvinte, G. Daras, E. Price, A.G. Dimakis, and J.I. Tamir. Robust compressed sensing MRI with deep generative priors. NeurIPS, 2021.
[Song 2022] Y. Song, L. Shen, L. Xing, and S. Ermon. Solving inverse problems in medical imaging with score-based generative models. ICLR, 2022.
[EHT 2019] The EHT Collaboration et al. First M87 Event Horizon Telescope Results. IV. Imaging the Central Supermassive Black Hole. ApJL, 2019. | NeurIPS_2023_submissions_huggingface | 2,023 |
MultiMoDN—Multimodal, Multi-Task, Interpretable Modular Networks | Accept (poster) | Summary: This paper presents MultiModN, a modular network that can deal with multimodal multitask problems and is inherently interpretable. MultiModN architecture consists of one encoder module for each modality, and one decoder module for each task. Each encoder module takes in a previous state and one modality input, and outputs the next state; the final state is obtained by sequentially passing the state through the encoder for each available modality (therefore this model can deal with data points with missing modalities as well). The authors conducted experiments on 10 tasks across 3 real-world domains with vastly different modalities. MultiModN was compared to P-fusion, and both models used the exact same preprocessing steps and feature extraction to ensure fairness. The experiments showed that MultiModN is able to achieve similar performance compared to P-fusion while being inherently interpretable. Moreover, the authors conducted additional experiments that intentionally correlates missing modalities and certain labels in the training set, and shows that MultiModN is less affected by the correlation than P-Fusion (i.e. more robust against MNAR).
Strengths: 1. This paper studies the important problem of building robust and interpretable models for solving multimodal real-world tasks. Although the new model did not improve performance, improving model interpretability and robustness to spurious correlations is also extremely important for real-world settings where mistakes can have real impacts.
2. The experiments are done in a very well controlled setting. The authors purposefully used the exact same module architectures and feature extractors as the baseline to ensure fair comparison, and confidence intervals are always included. The experiments convincingly showed that the model is indeed inherently interpretable, and that the model is robust to MNAR compared to P-fusion and does not perform worse than P-fusion overall.
3. The paper presentation is generally quite good. The methodology description is clear and detailed, and the experiment details are all included. The figures and tables are nice looking and easy to understand.
Weaknesses: The experiments are restricted to the exact same settings and pre-processing steps as P-fusion. While this ensures the fairness of comparison, it also means that we don't have evidence on whether this modular approach will also be able to achieve similar performance or alleviate MNAR problem if we have a different baseline approach / pre-processing. Perhaps a small proof-of-concept experiment showing that this approach can also work under different settings will greatly improve the significance and impact of this work.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: Q1. Since all encoders are dense-layers (I assume that means MLPs), is it correct that the inputs to the encoders are the features extracted from each modality, which is always a single vector?
Q2. The only part of the paper that I am completely lost on is line 149-151 "Extension to time series". I understand that the M encoders are applied sequentially, but what does that have to do with time series? For time-series modality, doesn't the feature extractor turn it into a single vector such that it can go through the dense layers of the encoders (as in Q1)? I also don't see how this "extension" is relevant to any other part of the paper.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 3 good
Limitations: The authors have adequately addressed the limitations.
I believe there is no foreseeable potential negative social impact in this work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **SUMMARY:**
We kindly thank the reviewer for this careful evaluation of our work and greatly appreciate the positive review. We strongly agree with all points raised.
We have addressed each in a point-by-point response below and feel that the resulting edits have further improved the manuscript.
**STRENGTHS:**
We are happy that the reviewer feels the paper addresses the important problem of multimodality in real-world settings. We are additionally encouraged that the reviewer believes the experiments are conducted in a “well-controlled setting” and that the paper presentation is strong with “clear and detailed descriptions”. We are immensely grateful that reviewer 3 recognizes the contributions of the paper as we intended!
**WEAKNESSES:**
The reviewer rightly explains that our **intentional alignment of architectures (to ensure fair comparison) purposely limits our ability to make a performance comparison to the baseline**, and views this as a weakness.
To provide insight into performance gains, we performed additional experiments as suggested to showcase the benefits of modularity with vastly different training and inference settings. The results of 30 new experiments of inference encoders, each performed with 5-fold cross-validation are included in Figure 1 of the rebuttal. We compare P-Fusion and our approach on both tasks of the MIMIC dataset using all possible combinations of four inputs modalities at test time. MultiModN ignores missing modalities whereas P-fusion imputes and therefore encodes missing modalities.
We note that inference performance shows no significant differences between P-Fusion and MultiModN across all experiments (using 95% CIs). Figure 1 shows that, on average, P-Fusion tends to overfit more to the most dominant (visual) modality. When this modality is missing (at random or completely at random), MultiModN performs better on a combination of the remaining modalities (demo, text, time series). In the case of missing modalities, the observed effect in Figure 1 is weak: confidence intervals overlap. Under the MNAR (missing not at random) scenario described in Experiment 3 of the paper (Section 6.3), however, the difference becomes significant.
**QUESTIONS:**
The reviewer had 2 questions:
**Regarding the inputs of encoders**: yes, in our case the input is always a single vector and the encoders are always MLPs. For the MIMIC dataset, which contains images as an input modality, we use a pre-trained CNN to preprocess the data, which results in a vector. However, nothing in the design of MultiModN restricts us to MLP encoders that expect a 1D vector as input. The pre-trained CNN could be included in the encoder as a frozen component, so that the encoder would receive images as input without any changes to the downstream operations. Any suitable architecture can be used as an encoder; we chose the presented simplified architecture to demonstrate the approach. We agree that this point needs to be clarified and made explicit, and we will state explicitly that in our experiments all encoders are MLPs that expect 1D vectors as input.
The second question of the reviewer **discusses the role and relevance of the "extension to time series" section** in the architecture. We agree that the time-series setting of the MultiModN architecture (relevant in EDU and Weather) could be better described. As the reviewer correctly notes, in the MIMIC setting we add the time series directly as a modality, which embeds the data as a single vector.
However, in real-world settings with data streams, real-time data is collected continuously over weeks and months while the model is simultaneously expected to perform inference. For example, the education version of MultiModN can make a prediction of student pass-fail success at week 1, week 2, week 3, etc., with new data from each week. MultiModN allows us to simply add the new week's data (i.e., week 3) and update the predictions on all tasks (without requiring the model to re-run inference on the week 1 or week 2 predictions). As described in the appendix, the "continuous" classification or regression tasks (Tasks 4 through 8) predict at every timestep with the new data from that timestep (each hour for temperature forecasting and each week for student success or dropout prediction).
**LIMITATIONS:**
We are happy that the reviewer feels we have appropriately addressed all limitations in the paper.
---
Rebuttal Comment 1.1:
Comment: Thank you for your clarifications and the additional studies regarding applying your method to different settings! I raised my score from 6 to 7. | Summary: This paper describes an architecture for multimodal multitask learning which is robust to missing modalities/tasks both at training and test time. This is composed of:
- modality-specific encoder modules,
- (assuming an ordering among modalities) a hidden state which depends on the output of a modality-specific encoder, and which feeds into the next encoder,
- a decoder that takes the state and produces an output for each task.
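The state-passing design summarized above can be illustrated with a minimal, self-contained sketch. All names here (`ModalityEncoder`, `TaskDecoder`, `multimodn_forward`) and the toy weighted-sum/mean computations are our own hypothetical stand-ins, not the paper's implementation; the point is only the control flow: the state is folded through one encoder per available modality, missing modalities are skipped rather than imputed, and every task decoder reads the resulting state.

```python
class ModalityEncoder:
    """Hypothetical stand-in for a modality-specific encoder: (state, input) -> next state."""
    def __init__(self, weight):
        self.weight = weight

    def __call__(self, state, x):
        # Fold the modality input into the running state (here, a simple weighted sum).
        return [s + self.weight * v for s, v in zip(state, x)]


class TaskDecoder:
    """Hypothetical stand-in for a task-specific decoder: state -> prediction."""
    def __call__(self, state):
        return sum(state) / len(state)


def multimodn_forward(state, encoders, inputs, decoders):
    """Pass the state sequentially through each modality encoder, then decode all tasks.

    `inputs[i] is None` models a missing modality: its encoder is simply
    skipped, so the state passes through unchanged and no imputation is needed.
    """
    for encoder, x in zip(encoders, inputs):
        if x is not None:  # skip missing modalities
            state = encoder(state, x)
    return [decoder(state) for decoder in decoders]


# Two modalities, two tasks, a 2-dimensional state initialized to zeros.
encoders = [ModalityEncoder(1.0), ModalityEncoder(0.5)]
decoders = [TaskDecoder(), TaskDecoder()]

all_present = multimodn_forward([0.0, 0.0], encoders, [[1.0, 2.0], [2.0, 2.0]], decoders)  # -> [2.5, 2.5]
one_missing = multimodn_forward([0.0, 0.0], encoders, [[1.0, 2.0], None], decoders)        # -> [1.5, 1.5]
```

Note how the second call degrades gracefully: the missing modality's encoder is skipped entirely, which is the mechanism the paper credits for robustness to systematic missingness.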
The method is tested on three multimodal datasets, namely MIMIC, EDU and Weather2K.
Strengths: - multimodal multitask learning is a very interesting topic which is very relevant to this venue
- the idea to make a multimodal multitask system robust to missing modalities is clearly good
- the overall motivation and intuition behind the proposed architecture is sensible and intuitive
Weaknesses: - The major limitation is the lack of reference and comparison to other multimodal multitask approaches, particularly those based on transformers in the vision/language domain. For instance,
[1] UniT: Multimodal Multitask Learning with a Unified Transformer
Ronghang Hu Amanpreet Singh CVPR 2021
[2] Are Multimodal Transformers Robust to Missing Modality?
Mengmeng Ma, Jian Ren, Long Zhao, Davide Testuggine, Xi Peng CVPR 2022
The submission would be much stronger if the authors would discuss their contribution relative to these works, and possibly compare empirically their architecture to these ones using the same benchmarks
- Related to the point above, there should be at least discussion about the recent trend to turn every task to text generation using a large language model (LLM), and multimodal multitask system leveraging LLMs. For instance:
[3] Flamingo: a Visual Language Model for Few-Shot Learning
Jean-Baptiste Alayrac, Jeff Donahue et al. NeurIPS 2022
- Related to transformers, the choice of using a RNN seems rather unnatural because it enforces an artificial ordering over modalities. Instead transformers operate on sets (of any size). I wonder whether the authors have considered replacing the RNN with a transformer.
- The paper lacks clarity. The method should be described in a more formal way to better understand the implementation details. For instance, I am unclear:
- whether decoder parameters are shared across the different modalities,
- how the decoders predictions are combined across modalities/modules,
- what parameters are actually subject to training,
- what happens in the encoder when a modality is missing
- The empirical results are weak. In fact, the proposed method often times works worse than the simple fusion baseline.
- It would be nice if the authors demonstrated the benefits of modularity, for instance by:
- adding a new task or modality over time
- improving a task module with a subset of modalities at training time, the task improves also when at test time other modalities are used, etc.
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: See comments above in the weakness section.
The major question is about 1) relation to prior work, particularly in the vision/language domain, 2) use of more standard benchmarks (in addition to the chosen ones), 3) use of RNN as opposed to transformer, 4) various clarifications of the approach.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 2 fair
Limitations: No concern.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **SUMMARY:**
We kindly thank the reviewer for the careful evaluation of our work and greatly appreciate the review. We address each concern in the point-by-point response below and feel that the resulting edits have further improved the manuscript.
**STRENGTHS:**
We are happy that the reviewer feels the paper covers an interesting topic that is very relevant to NeurIPS. We are also glad that the reviewer agrees on the importance of making multimodal models resilient to the common issue of systematic missingness and that they find our approach sensible and intuitive.
**WEAKNESSES:**
We agree with the reviewer’s opinion that our submission would be much stronger if compared to **further multimodal multitask baselines, particularly transformers in the vision/language domain**. We had in fact tested several baselines (e.g., BiLSTMs, various compositions of P-Fusion), none of which improved upon MultiModN; for readability, they were excluded. We now include a transformer baseline selected with hyperparameter tuning on all 3 datasets. The results are found in Table 1 of the PDF and will be included in the final submission.
The results indicate that MultiModN outperforms or at least matches the transformer benchmark in the vast majority of single and multitask settings, and comes with several interpretability, missingness, and modularity advantages. Specifically, using the primary metric for each task (BAC for classification and MSE for regression tasks), MultiModN beats the transformer baseline significantly in 7 tasks, overlaps 95% CIs in 11 tasks, and loses very slightly (by 0.01) in 2 regression tasks.
We agree with the reviewer that a **larger discussion of recent multimodal trends using transformers and LLMs** would improve the paper. UniT is a promising multimodal, multitask transformer architecture; however, it remains monolithic, trained on the union of all inputs (padded when missing), which are fed into the model in parallel. This risks exposing the model to systematic missingness during training and reduces model interpretability (requiring all modalities to be represented even if not present) and portability (the transformer has 427,421 trainable parameters for EDU, while MultiModN achieves better performance with 12,159). [1]'s recent work reports similarly erratic behavior of transformers under missing modalities, but is only tested on visual/text inputs.
The recommendation of the LLM approach Flamingo is interesting but is also limited to only 2 modalities (visual/text). It is not clear how tabular and time series would be handled or how this would affect the context window at inference. Combining predictive tasks with LLMs will also greatly impact interpretability, introducing hallucinations and creating a model complexity that may use learned text bias to influence predictions.
**The reviewer wonders whether transformers are superior because they don’t impose an order, which may be “unnatural”.** MultiModN can be made completely order-invariant and idempotent, as shown in [2], requiring only randomization of the modality order during training. For interpretability, we argue that sequential inference (in any order) is far superior to parallel input due to its decomposability: it allows the user to visualize the effect of each input and aligns with Bayesian reasoning.
We will present this improved discussion of our positioning in relation to recent multimodal transformers and LLMs in the final paper.
The reviewer makes excellent recommendations to improve the **clarity of our paper and the formalism of our architecture**. We have integrated all of them, specifically detailing that:
- Decoder parameters are indeed shared across the different modalities.
- The decoder predictions are combined across modalities/modules by averaging the loss. It is interesting to note that a weighted loss scheme could force the model to emphasize certain tasks over others.
- All encoder and decoder parameters are subject to training, except for the pretrained encoders used to generate embeddings in the MIMIC dataset (used to replicate the exact baseline parallel fusion setting).
- When a modality is missing, the encoder is skipped and not trained. (This is also depicted in Figure 1 of the original submission).
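For concreteness, the loss-combination rule in the bullets above (averaging the per-task losses, with the optional weighted scheme the rebuttal mentions) could look like the following illustrative sketch. The function name `combined_loss`, the nested-list layout, and the uniform-weight default are our assumptions for illustration, not the authors' implementation:

```python
def combined_loss(per_step_task_losses, task_weights=None):
    """Combine losses by (weighted) averaging across encoder steps and tasks.

    per_step_task_losses: list over encoder steps, each a list of per-task losses.
    task_weights: optional per-task weights; uniform when None. A non-uniform
    choice would emphasize certain tasks, as suggested in the rebuttal.
    """
    n_tasks = len(per_step_task_losses[0])
    if task_weights is None:
        task_weights = [1.0] * n_tasks  # plain averaging by default
    total, norm = 0.0, 0.0
    for step_losses in per_step_task_losses:
        for w, loss in zip(task_weights, step_losses):
            total += w * loss
            norm += w
    return total / norm


# Two encoder steps, two tasks.
avg = combined_loss([[1.0, 3.0], [2.0, 4.0]])                       # -> 2.5
weighted = combined_loss([[1.0, 3.0], [2.0, 4.0]], [1.0, 0.0])       # -> 1.5
```

The weighted call shows how zeroing a task's weight removes it from the training signal entirely, one simple way to trade off tasks without touching the architecture.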
The reviewer claims that the **empirical results are weak because our model is not superior to the baseline in terms of performance**.
It is critical to understand that we (purposely) do not aim to improve on the baseline in terms of performance. Rather, we limit the model’s ability to do so, by aligning the pre-extracted features to isolate the comparison of the fusion step. Multimodal multi-task models often have degraded performance compared to their single modality counterparts [3]. We argue that our model is by far superior to the baseline by virtue of being modular, interpretable, composable, robust to systematic missingness and multi-task without impacting performance.
The reviewer would like to see more support for our method’s performance using **various numbers and combinations of inputs at inference**. This is an excellent suggestion. We now include the results of our model on various numbers and combinations of inputs (see Figure 1 of the attached PDF). The baseline would have to impute missing features in all of these combinations, exposing it to catastrophic failure in the event of systematic missingness (Section 6.3).
**LIMITATIONS:**
We are happy that the reviewer feels we have appropriately addressed all limitations in the paper.
**REFERENCES:**
[1] Ma, Mengmeng, et al. "Are multimodal transformers robust to missing modality?" CVPR 2022.
[2] Trottet, Cecile, et al. "Modular Clinical Decision Support Networks (MoDN)—Updatable, interpretable, and portable predictions for evolving clinical environments." PLOS 2023.
[3] Liu, Shengchao, et al. "Loss-balanced task weighting to reduce negative transfer in multi-task learning." AAAI 2019.
---
Rebuttal Comment 1.1:
Title: post-rebuttal assessment
Comment: I would like to thank the authors for their response, and additional material.
The authors have clarified several questions and provided stronger empirical support for their approach, and I have raised my rating for this submission. While I am not opposed to accepting this work, I still feel it is borderline because a major revision would be needed to address and integrate in the submission all the points of this discussion. Even though the empirical validation does not include popular language-X domains, this is however now sufficient to support the proposed approach.
---
Reply to Comment 1.1.1:
Comment: We thank R2 for their response. We are happy that stronger experimental support for MultiModN has been conveyed and that the questions raised by the reviewer have been fully answered. To the best of our knowledge, **we have addressed all of the reviewer’s concerns**.
Note that all experiments and analyses have **already been completed** and all discussion intended to be included in the paper has **already been written** into the rebuttal. We therefore expect this will only be a **minor revision**.
For your perusal, we have already included in the following comment **the changes we intend to make within the allowed additional page**. | Summary: This paper proposes a modular multimodal model that fuses latent representations in a sequence of modality to conduct a combination of predictive tasks. It utilizes a flexible sequence of model and task-agnostic encoders to produce an evolving latent representation for a combination of multi-task, model-agnostic decoder modules to conduct downstream tasks. The authors conduct experiments on benchmarks of different domains to show the effectiveness of the proposed method. The experiments show that the proposed method outperforms traditional monolithic multi-modal models.
Strengths: 1. This paper is well-written and well-organized. The design of methods and experiments are clear and easy to understand.
2. The proposed method is novel and effective. The modules are task-agnostic and can be adapted on any number of tasks on different modalities.
3. The experiments are substantial and convincing. The proposed method shows significantly better performance compared with traditional methods.
Weaknesses: 1. It seems that for single tasks the proposed method cannot provide improvement compared with the baseline.
2. It would be helpful to support the claim that the proposed model can handle any number/combination of tasks by adding more experiments of combinatorial tasks.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: 1. The authors should make the figures in the experiments part more clear to see the comparison of the proposed methods and the baseline.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations: The authors have addressed the technical limitations in the paper. I do not find any negative societal impact of their work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **SUMMARY:**
We kindly thank the reviewer for this careful evaluation of our work and greatly appreciate the positive review of our “excellent” presentation, methodological soundness, and worthwhile contribution.
We strongly agree with all points raised. We have addressed each in a point-by-point response below and feel that the resulting edits have further improved the manuscript.
**STRENGTHS:**
We are happy that the reviewer feels the paper is well-written, well-organized, clear, and easy to understand. We are also thrilled to see that they appreciate the “novelty and effectiveness” of our proposed model, and that our experiments are substantial and convincing.
**WEAKNESSES:**
The reviewer notes that for single tasks, **our method does not improve compared with the baseline**.
Indeed, this is intended. It is critical to understand that we (purposely) do not aim to improve on the baseline in terms of performance. Rather, we purposely limit the model’s ability to do so, by aligning the pre-extracted features. This is to isolate the comparison of the fusion step. In a phenomenon called “negative transfer”, multimodal multi-task models often have degraded performance compared to their single-modality counterparts [1].
We argue that our model is by far superior to the baseline by virtue of being modular, interpretable, composable, robust to systematic missingness, and multi-task *without* impacting performance compared to the parallel baseline which does not have these other features and will make this point more explicit in the final manuscript.
The reviewer would like to see **more support for our method’s performance using various numbers and combinations of inputs** at inference. This is an excellent suggestion. We now include additional experiments detailing the results of our model on various numbers and combinations of inputs (see Figure 1 of supplemental PDF information). The baseline would have to impute missing features in all these combinations, exposing it to catastrophic failure in the event of systematic missingness as explained in Section 6.4 and supplemental Section E.3.
**QUESTIONS:**
Thank you for the suggestion to improve the clarity of the experimental figures. As recommended, we have made several formatting edits to better distinguish MultiModN and the baseline directly in the figures. The revised figures are showcased in the attached PDF, with differences highlighted in yellow.
**LIMITATIONS:**
We are happy that the reviewer feels we have appropriately addressed all limitations in the paper.
**REFERENCES:**
[1] Liu, Shengchao, Yingyu Liang, and Anthony Gitter. "Loss-balanced task weighting to reduce negative transfer in multi-task learning." AAAI 2019.
---
Rebuttal Comment 1.1:
Comment: Hi Reviewer pFnK,
Since the discussion with the authors is closing soon, could you please go over the author's rebuttal and provide some feedback?
Regards,
AC | null | null | Rebuttal 1:
Rebuttal: We are grateful to the reviewers for their careful evaluation and very happy to receive positive and high-quality reviews. We agree with all points raised and have been able to reflect and respond to each in detail.
We highlight a major concern of the only reviewer (R2) recommending rejection: our model does not show significant performance improvement compared with the baseline. This is an important misunderstanding, as we purposely limit our model's performance in order to fairly compare the multimodal fusion step. At equivalent performance, our model architecture is by far superior to the baseline by virtue of being inherently modular, interpretable, composable, robust to systematic missingness, and multi-task. The aim was to achieve these advantages without impacting performance compared to the baseline, which does not exhibit these advantages.
Indeed, R3 cites this approach as an important strength: *“The experiments are done in a very well controlled setting. The authors purposefully used the exact same module architectures and feature extractors as the baseline to ensure fair comparison, and confidence intervals are always included. The experiments convincingly showed that the model is indeed inherently interpretable, and that the model is robust to MNAR compared to P-fusion and does not perform worse than P-fusion overall.”*
**R1** provides a very positive review and has recommended acceptance, citing our *“convincing and sound experiments”* described in a *“well-written, organized, and clear”* manner. They have requested the following:
1. **More support for our method’s ability to use various numbers and combinations of inputs at inference.** We provide 30 new inference experiments in Figure 1 in the attached PDF. The experiments test all combinations of different inference settings from the training setting and show the expected/desired outcome, where our method is not significantly different from the baseline (all 95% CIs overlap) despite having the various advantages of modularity (robustness to systematic missingness, interpretability, portability).
2. **Further clarity on the experimental figures.** We have formatted Figures 3, 4, and 5 and Appendix Figure 11 to better highlight the difference between our model and the baseline. These improved figures are included as Figure 2 in the attached PDF (changes highlighted in yellow).
**R2** provides a generally positive review (*“interesting topic”*, *“relevant to NeurIPS”*, *“sensible and intuitive approach”*), but recommends rejection. They cite several issues motivating this recommendation which have now been fully addressed:
1. **The reviewer recommends a transformer baseline and a commentary on multi-modal LLM approaches.** Based on the related work mentioned by R2, we discuss in detail how our work compares to other approaches (in vision and text). Notably, we have conducted additional experiments to now include a multitask, multimodal transformer baseline (found in Table 1 of the attached results), which is overall less performant than our architecture and does not have the advantages of interpretability or modularity. We further explain how LLMs are not well suited to the task at hand, as they greatly degrade interpretability and introduce the risk of hallucination from previously biased text inputs.
2. **The reviewer asks for various clarifications regarding the model architecture** regarding decoder parameter sharing, decoder loss, trained parameters, and treatment of missing modalities. We have elaborated on each of these areas formally in our architecture section and would like to thank the reviewer for their careful perusal.
**R3** provided a positive review stating *“improving model interpretability and robustness to spurious correlations is extremely important for real-world settings where mistakes can have real impacts”* and recommends acceptance:
1. **The reviewer requests that we experiment with diverse train and test settings** beyond an exact comparison with P-Fusion, aligning with R1. Our new experiments in Figure 1 address this concern.
2. **The reviewer requests architecture clarifications on the inputs of the encoders and the “extension to time series” section**. The input vectors in our experiments are 1D. Depending on the nature of the modalities in the dataset, we support other encoders (CNN, transformers, and other pretrained models) to determine the ideal encoding for each input/modality. For time-series, we present the real-world setting of a data stream, where inference takes place at the same time as data is being received (i.e. predicting student performance at each week of a course as the course is being conducted). These settings are relevant for continuous prediction tasks in EDU and Weather, showing how MultiModN can be used for incremental time series prediction. We will state these descriptions explicitly in the paper.
To summarize the additional experiments, we provide the following results to strengthen our paper in the PDF attachment:
- **New results on all 10 tasks across 3 datasets from a multimodal multitask transformer architecture** based on the related work highlighted by R2 (Table 1). These results show that MultiModN is comparable or better than the Transformer in both single task and multi-task settings, in addition to having advantages that are not present with the Transformer (size, modularity, interpretability).
- **30 new inference modality experiments comparing P-Fusion and MultiModN in different settings and tasks** (Figure 1). These indicate comparable or often better performance in contrast with P-Fusion across diverse inference settings.
- **Improved diagrams from the results sections** to clearly understand the difference between the baseline and MultiModN (Figure 2).
Overall, we thank the reviewers for their thoughtful and expert advice. We hope that we have adequately addressed reviewer concerns and further improved confidence in our work.
Pdf: /pdf/5c43ddfae24faefe644a09b4274007563a3e20f0.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Sharp Bounds for Generalized Causal Sensitivity Analysis | Accept (poster) | Summary: The authors generalize a class of causal sensitivity models that includes the traditional MSM, the continuous-treatment CMSM, and the longitudinal (time-varying treatment) LMSM. They show how to compute sharp bounds for the causal estimands by taking inspiration from recent work. Their general framework also allows mediation analysis. They provide an algorithm for computing these bounds.
Strengths: The method is solid and explained well. Figure 2 is nice. Mediation analysis is an important contribution to causal sensitivity models. A broader understanding of sensitivity models is always valuable and the effort to generalize is a good one.
Weaknesses: My main concern is that this generalization that unifies the MSM, CMSM, and LMSM is not very useful. The weighting function seems a bit contrived. It is necessary because the CMSM and LMSM do not use the nominal propensity at all, but the MSM does. Isn't it strange to ignore the nominal propensity and give the same bounds to all potential outcome distributions? Shouldn't the observed confounding inform the unobserved confounding?
A recent alternative to the CMSM is the $\delta$MSM [arXiv:2204.11206] that takes a different approach and appears to perform better. The authors could discuss alternative models like this one or at least keep them in mind when considering general classes of sensitivity models.
The clever approach to sharp partial identification is not novel [see for instance arXiv:2304.10577]. The benefit of the mediation analysis is not really made clear in the results of this submission.
In terms of results, the authors employ a purely synthetic benchmark. The real-world data are interesting but they lack a ground truth. The authors could include some well-known semi-synthetic benchmarks like IHDP and induce hidden confounding by hiding some of the observed confounders.
The authors do not compare with previous methods in their benchmark except for Table 1, where they use a custom weighting function to beat the older CMSM algorithm. That is not very convincing. Also Table 1 should at least be discussed more. The results for the weighted CMSM seem conflicting: tighter bounds but worse coverage?
On a lesser note, the technical details are a bit dense.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Could you support your reasoning for why this weighting function is a natural interpretation of the more specific sensitivity models? How is it helpful and how can I use it in newer settings? Why should it be set to zero in some cases?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: Limitations are discussed a bit but they do not address potential societal impacts. It is debatable if that is necessary for this kind of work, but I think pitfalls of these kinds of sensitivity analyses should be discussed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your helpful review!
### Response to “Weaknesses”
* Thank you for your comments. The main motivation behind the GMSM definition (and weighting function) is indeed to unify the three important sensitivity models from the literature.
* We argue that our **main contribution** is the derivation of **novel bounds** that are valid for all **sensitivity models**, **distributional effects**, and **mediation/path analysis**. For example, practitioners may use our bounds for the well-established MSM in a mediation analysis setting **without ever explicitly using the GMSM formulation via a weighting function**. Hence, you may simply think about the weighting function as a **notational tool**.
* However, we believe that the weighting function has **additional advantages** beyond providing a unified notation when **domain knowledge about the confounding structure is available**. Consider an observational study on the effect of smoking on cancer risk, confounded by certain unobserved genes. It may be known that these genes do not affect the cancer risk for a certain population with covariates $\mathbf{x} \in \mathcal{X}_0$, e.g., female individuals. Hence we can set the weight function to $q(\mathbf{a}, \mathbf{x}) = 1$ for all $\mathbf{x} \in \mathcal{X}_0$ to obtain sharper bounds for the average treatment effect.
* **The CMSM sets the weighting function to zero because observed confounding is not necessarily informative for unobserved confounding**. For example, $X$ and $U$ could be independent, or at least nothing about the dependence may be known. The binary MSM uses the propensity score $\mathbb{P}(a \mid x)$ as a weighting function because, for large or small propensities, most of the “randomness” in $A$ is explained by $X$. This implies that $U$ cannot have a large effect on $A$ for such $x$. However, this kind of **reasoning breaks down for continuous treatments**. Note that the CMSM has been **proposed in several previous papers** and is not our invention. Furthermore, our paper makes **major contributions in settings beyond continuous treatments**.
* **Action:** We will add a detailed discussion about the weighting function to our paper, summarizing the points above.
* The delta-MSM was **published after the NeurIPS deadline** (UAI 2023), which is why we did not include it in the related work at the time of submission.
* We would like to emphasize that it is **not** applicable to **other treatment types**, **mediation/path analysis** or **distributional effects**. Extending our bounds to the delta-MSM may be an interesting direction for future research.
* Benchmarking different sensitivity models is difficult due to different assumptions on the data-generating process. We follow **established literature** on sensitivity analysis and study the optimality of bounds under specific sensitivity models, **not the optimality of the sensitivity models** themselves.
* **Action:** We will add a discussion to the appendix.
* The paper mentioned (arXiv:2304.10577) is one of multiple papers that deals with **CATEs for binary treatments**, which we all cite in our related work section. In our paper, **we never claim that our bounds for binary CATE are novel** (e.g., see our abstract). However, our approach and intuition for deriving the bounds **is novel** (via SCM-based probability mass transport, Fig. 2). In contrast to previous work, it can be easily **generalized to more complex causal inference settings** beyond binary CATE. **Action**: We will clarify this in our section on related work.
* **Mediation analysis** from observational data is crucial in many disciplines such as epidemiology, economics, and algorithmic fairness. It aims to answer questions like “What is the direct causal effect of a medical treatment on health, that is not mediated through a change in diet?”. Our results show that we obtain valid bounds for direct, indirect, and path-specific effects under unobserved confounding. **Action:** We will add a discussion to the appendix.
* We do not compare against more previous works because: (1) For mediation/path analysis and distributional effects, there **are no baselines** for MSM-based sensitivity analysis. (2) For binary CATE, we obtain the same sharp bounds (in population) as previous work. **Action:** We will add a clarification to the experiment section.
* With Table 1, we try to convey two messages: (1) Under the CMSM, we get almost the same bounds as Jesson et al. (2022). This is because their algorithm is an **approximation** of our bounds. In contrast, we have **closed-form solutions**, and we thus achieve better **computational speed**. (2) Under a weighted CMSM, we can obtain **tighter bounds** for the average treatment effect, that is, smaller overall interval lengths and lower coverage for small $\Gamma$. Note that we do **not** claim that the weighted CMSM is superior to the unweighted CMSM. As mentioned earlier, we **refrain from benchmarking sensitivity models**. Instead, we outline the possible use of the weight function for incorporating domain knowledge. **Action:** We will clarify this in the paper.
* **Additional experiments**: Note that we **provided rigorous mathematical proofs for all our results**. However, we agree that adding experiments using IHDP is an excellent idea to improve the experimental section of our paper. \
**Action:** We performed additional experiments using the IHDP data with hidden confounding from Jesson et al. (2021). Note that IHDP is a dataset with binary treatments and no mediators. Here, our contributions are bounds for **distributional effects**. Details and results are provided in the uploaded PDF file. We also updated our anonymized repository.
* Thank you for pointing out the density of technical details. **Action:** We will move certain technical details to the appendix.
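To make the role of the weighting function $q$ discussed above concrete, here is a minimal sketch (hypothetical helper name; the interpolation pattern is read off from the MSM/CMSM weight expressions, with the exact GMSM definition in the paper): $q = \mathbb{P}(a \mid x)$ yields MSM-style weights, $q = 0$ the CMSM weights, and $q = 1$ encodes "no unobserved confounding for these $x$", collapsing both weights to 1:

```python
def score_weights(gamma, q):
    """Density-ratio weights implied by a weighting-function value q.

    Hypothetical sketch: interpolates between 1 (q = 1, no unobserved
    confounding) and the CMSM weights 1/gamma, gamma (q = 0);
    q = P(a | x) yields MSM-style weights.
    """
    low = (1.0 - 1.0 / gamma) * q + 1.0 / gamma   # weight below the quantile cut
    high = (1.0 - gamma) * q + gamma              # weight above the quantile cut
    return low, high

gamma = 2.0
assert score_weights(gamma, 0.0) == (1.0 / gamma, gamma)  # CMSM weights
assert score_weights(gamma, 1.0) == (1.0, 1.0)            # point identification
```

Setting $q(\mathbf{a}, \mathbf{x}) = 1$ on a subpopulation $\mathcal{X}_0$, as in the smoking example above, makes both weights equal to 1 there and hence tightens the resulting bounds.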
### Response to "Limitations"
* **Action:** As recommended, we will add a discussion on potential societal impacts.
---
Rebuttal Comment 1.1:
Comment: Thank you for your detailed reply. Your explanations helped me to better understand the framing of sensitivity models using the weighting function. I now have a greater appreciation for the generality of this approach and am correspondingly raising my score to a 6.
The reason I am not raising my score further is that I believe the experimental section could have been a bit more fleshed out. The additional IHDP results are interesting, but they appear to only compare distributional/quantile bounds against expectations for a hypothetical downstream task. Since the method proposed in this work generates sharp bounds for a variety of conditions, I would have imagined that the sharpness could be demonstrated in practice to give better bounds than previous approaches, e.g., for partially identifying conditional expectations in semi-synthetic settings with hidden confounding.
---
Reply to Comment 1.1.1:
Title: Thank you for your response and for raising your score
Comment: Many thanks for acknowledging our rebuttal and for raising your score. Please allow us to elaborate on our experimental results.
We would like to reiterate that the aim of our paper is not to improve on previous results for binary CATE but rather to generalize existing sharp bounds to other sensitivity models, causal estimands, and causal inference settings. For this purpose, we propose an entirely new approach to deriving sharp bounds in Pearl's SCM framework (see Fig. 2).
We agree that generally, benchmarking with previous bounds on (semi-)synthetic datasets is desirable to evaluate performance improvement. However, in most settings where our paper has novel contributions (e.g., mediation analysis, distributional effects) **there exist currently no baselines**. **For binary CATE, we obtain exactly the same (sharp) bounds** as Dorn and Guo (2022). That is, the mathematical formulas for the bounds coincide when setting $\mathcal{D} = \mathbb{E}$ for the MSM in our Corollary 1. We believe that the fact that we obtain the same sharp bounds as previous literature is rather encouraging, and indicates (aside from our proofs and experimental results) that our approach for deriving the bounds (Fig. 2) is indeed correct. For continuous CATE, we provide a comparison in Table 1.
Hence, there is no point in benchmarking our CATE bounds with other approaches for the IHDP data (binary treatment, no mediators). We thus decided to use the IHDP data to illustrate how our bounds for distributional effects can aid decision-making under unobserved confounding. | Summary: This paper is about sensitivity analysis (SA) of causal queries in SCMs. In practice, given a causal query and a set of models, the goal is to compute a query's lower and upper bounds. The authors first derive a class of models to be used for SA and show how this extends existing models. An algorithm to obtain the bounds in such cases is derived.
(After the rebuttal, I decided to raise the rating of the paper from 4 to 6)
Strengths: The technical results are sound and non-trivial.
The experiments show good bounds obtained in this way.
Weaknesses: The literature on partially identifiable queries is ignored. In particular existing techniques for bounding such queries are not considered.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: Would it be possible to compare the present method against algorithms for the bounding of non-identifiable queries? E.g. Zhang and Bareinboim, Duarte et al., and Zaffalon et al. worked in this direction in the last two years. I think the sensitivity analysis the authors consider is a heuristic approach to the same problem. If this is true, not having a comparison against these methods is a serious issue.
I also believe that the authors would take advantage of the literature about so-called "imprecise probabilities" and "credal networks" as these models implement the kind of sensitivity analysis of interest for the authors. In particular, the paper "Structural Causal Models Are (Solvable by) Credal Networks" might be a helpful reading.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 2 fair
Limitations: I don't see specific issues in this direction.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ## Response to reviewer H542
Thank you for your review and your helpful comments!
### Response to “Questions”
* Thank you for giving us the opportunity to clarify the difference between two related approaches for bounding causal effects: **Causal sensitivity analysis (CSA)** and **causal partial identification (CPA) without sensitivity models**. In the following, we outline why CSA is **not** a heuristic approach to CPA and why the two approaches should not be benchmarked against each other.
* The main difference between CSA and CPA is that CSA imposes **sensitivity models**, that is, assumptions on the “strength” of unobserved confounding, which is controlled by a sensitivity parameter $\Gamma$. Methods for CPA do not make sensitivity assumptions but instead impose other strong assumptions (see points below). In practice, CSA can be used to test the robustness of causal effect estimates to violations of the unconfoundedness assumption (by varying $\Gamma$), while CPA may be applicable in situations where no domain knowledge about the confounding strength is available.
* **CSA is not a heuristic approach to CPA.** There is a large stream of literature on CSA in both the machine learning and statistics community (see our related work), with most papers deriving rigorous mathematical results on validity and optimality (“sharpness”) of bounds. In our paper, we provide **formal proofs** for all our results in addition to an extensive empirical evaluation.
* In our paper, we study the standard setting for (conditional) average treatment effects with continuous outcomes and optional mediators. Here, CPA is a **special case** of CSA when setting the sensitivity parameter to $\Gamma = \infty$. Hence, CSA provides **provably tighter bounds** than CPA. In fact, CPA bounds are **well characterized** for our setting: For binary treatments, so-called “no assumptions bounds” have been derived in [1]. Indeed, we obtain the same results with our bounds in the limit $\Gamma \to \infty$. For continuous treatments, informative CPA is impossible, that is, the CSA bounds converge to the boundary of the support of the outcome distribution for $\Gamma \to \infty$ [2].
* There is no free lunch in causal inference. Because approaches for CPA do not restrict the strength of unobserved confounding, they often impose **other (strong) assumptions** in order to derive informative bounds. For example, all three mentioned papers (Zhang and Bareinboim 2022, Duarte et al. 2021, Zaffalon et al. 2022) are only applicable to settings with **discrete observed variables**. In contrast, these papers are **not** applicable to continuous outcomes, which is the main focus of our paper. Other papers on CPA derive informative bounds by assuming the existence of valid instrumental variables (e.g., [3, 4]).
* For the reasons above, there is **no point in benchmarking** our bounds against CPA approaches. In our setting, this would reduce to setting $\Gamma = \infty$. Likewise, methods for only discrete variables or instruments are not applicable, and can thus **not** be used as baselines.
* **Summary**: CSA derives bounds under assumptions on the strength of unobserved confounding, and CPA derives bounds under other assumptions and often for more complex/ arbitrary causal graphs (e.g., using valid instrumental variables, or assuming only discrete variables). Both approaches are complementary to each other, and practical usage depends on the assumptions one is willing to impose on the underlying data-generating process.
* **Action:** We will expand our related work on CPA (see Appendix A.1) and include the mentioned references. Furthermore, we will add a detailed discussion comparing CSA and CPA, using the points provided above.
* Thank you for pointing out the literature on imprecise probabilities. It seems to be closely related to the CPA literature as the paper assumes **discrete variables** and does **not** impose any **sensitivity models**. We would like to reiterate that, in our setting, CPA is solved and corresponds to setting the sensitivity parameter to $\Gamma = \infty$. However, combining ideas from the imprecise probability literature with CSA could potentially be an interesting direction for future research. \
**Action:** We will expand our related work to include the literature on imprecise probabilities, in particular the paper mentioned. We will also add a discussion on potential future work.
### References
[1] Manski 1990, “Nonparametric bounds on treatment effects”, The American Economic Review
[2] Jesson et al. 2022, “Scalable sensitivity and uncertainty analysis for causal-effect estimates of continuous-valued interventions” NeurIPS
[3] Kilbertus et al. 2020, “A Class of Algorithms for General Instrumental Variable Models”, NeurIPS
[4] Padth et al. 2023, “Stochastic Causal Programming for Bounding Treatment Effects”, CLeaR 2023
---
Rebuttal Comment 1.1:
Title: Thanks for the clarification on the CSA vs. CPA thing. This motivates me to raise my score.
Comment: I appreciate the clarification about the difference between CSA and CPA provided by the authors—many thanks for that. I have experience with CPA but not with CSA. This motivated my question and doubts about the paper. The rebuttal affects my evaluation of the paper, and I am happy to move towards a positive recommendation. Regarding the different points raised by the authors in their rebuttal, I am not very convinced by the argument in the item starting with "There is no free lunch in causal inference". I don't believe that the fact that most of the papers on CPA cope with discrete (endogenous) variables reflects a necessary additional assumption. This is only related to the existing works starting from the more straightforward discrete case. Still, similar techniques will undoubtedly be explored soon also for continuous variables.
---
Reply to Comment 1.1.1:
Title: Thank you for your response and raising your score
Comment: Thank you for acknowledging our response and for raising your score. Please allow us to clarify our argument "There is no free lunch in causal inference". What we meant to say is that, in the standard CATE setting, CPA corresponds to setting $\Gamma \to \infty$ for our bounds. This is because we obtain our bounds by optimizing over all possible SCMs that are compatible with (i) the causal graph, (ii) the observed data distribution, and (iii) the sensitivity constraints. When setting $\Gamma \to \infty$, we ignore (iii) and only constrain our class of SCMs by (i) and (ii), thus performing CPA. **Hence, any other CPA approach for tighter bounds would provably require stronger assumptions.** For example, there are existing CPA approaches that yield tighter bounds by exploiting valid instrumental variables (see e.g., [3, 4]).
In the following, we characterize our (w.l.o.g. upper) bounds for $\Gamma \to \infty$ in the standard CATE setting: Observed covariates $X$, binary/ continuous treatment $A$, and continuous outcome $Y$. We are interested in the causal query $Q(x, a, \mathcal{M}) = \mathbb{E}\left[Y \mid x, do(A = a)\right]$ (Example 1 from our paper). Our upper bound is $Q^+ = \int_{\ell}^{F^{-1}(c_Y^+)} \frac{y}{s_Y^+} \mathbb{P}(y \mid x, a) dy + \int_{F^{-1}(c_Y^+)}^{u} \frac{y}{s_Y^-} \mathbb{P}(y \mid x, a) dy $, where $c_Y^+ = \frac{\Gamma}{1 + \Gamma}$ and $\ell, u$ are the lower/ upper support points of the distribution $\mathbb{P}(y \mid x, a)$, respectively.
1) For binary treatment $A \in \{0, 1\}$, using the MSM we obtain
\begin{equation}
Q^+ = \int_{\ell}^{F^{-1}(\frac{\Gamma}{1 + \Gamma})} y \left((1 - \Gamma^{-1}) \mathbb{P}(a \mid x) + \Gamma^{-1} \right) \mathbb{P}(y \mid x, a) dy + \int_{F^{-1}(\frac{\Gamma}{1 + \Gamma})}^{u} y \left((1 - \Gamma) \mathbb{P}(a \mid x) + \Gamma \right) \mathbb{P}(y \mid x, a) dy \xrightarrow[\Gamma \to \infty]{} \mathbb{P}(a \mid x) \mathbb{E}[Y \mid x, a] + (1 - \mathbb{P}(a \mid x)) u
\end{equation}
This bound is also known as the "no assumptions bound" or "Manski bound", originally derived in [1]. Of note, with our theory, we can derive similar bounds for distributional effects. Hence, **our paper even makes non-trivial contributions to CPA**. We will add this to our paper.
2) For continuous treatments, using the CMSM we obtain
\begin{equation}
Q^+ = \int_{\ell}^{F^{-1}(\frac{\Gamma}{1 + \Gamma})} \frac{y}{\Gamma} \mathbb{P}(y \mid x, a) dy + \int_{F^{-1}(\frac{\Gamma}{1 + \Gamma})}^{u} y \Gamma \mathbb{P}(y \mid x, a) dy \xrightarrow[\Gamma \to \infty]{} u
\end{equation}
That is, for continuous treatments the corresponding "no assumptions bound" is exactly the right support point of the observed distribution (see also [2]). Hence, informative CPA for continuous treatments and continuous outcomes is **not possible without imposing stronger assumptions** (such as IVs).
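The $\Gamma \to \infty$ behavior above can be checked numerically with a simple plug-in estimator (a hypothetical sketch, not the paper's Eq. (8)): sort the sample, weight points below the $\Gamma/(1+\Gamma)$ empirical quantile by $1/\Gamma$ and points above it by $\Gamma$:

```python
import random

def cmsm_upper_bound(samples, gamma):
    # Plug-in version of Q^+ under the CMSM: weight 1/gamma below the
    # Gamma/(1+Gamma) empirical quantile and gamma above it. The weights
    # integrate to 1: (Gamma/(1+Gamma))/Gamma + (1/(1+Gamma))*Gamma = 1.
    ys = sorted(samples)
    k = len(ys)
    cut = min(int(round(k * gamma / (1.0 + gamma))), k - 1)
    return (sum(y / gamma for y in ys[:cut]) +
            sum(y * gamma for y in ys[cut:])) / k

random.seed(0)
ys = [random.uniform(0.0, 1.0) for _ in range(50_000)]
mean_y = sum(ys) / len(ys)

b1 = cmsm_upper_bound(ys, 1.0)     # Gamma = 1: recovers E[Y | x, a]
b2 = cmsm_upper_bound(ys, 2.0)     # informative bound for moderate Gamma
b_inf = cmsm_upper_bound(ys, 1e4)  # Gamma -> infinity: approaches u = 1
```

For $\Gamma = 1$ the bound collapses to the sample mean, and as $\Gamma$ grows the bound approaches the right support point $u$ of the observed distribution, mirroring the limit derived above.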
**Action**: We will add the derivations above to our discussion on CSA vs CPA. | Summary: The authors propose a unified framework for causal sensitivity analysis under unobserved confounding that generalizes the Marginal Sensitivity Model (MSM). They derive sharp bounds for a diverse set of causal effects such as the CATE, mediation and path analysis effects, and distributional effects. The framework is applicable to discrete, continuous, and time-varying interventions. They offer, to my knowledge, a novel interpretation of the marginal sensitivity model via SCM. They show that in the case of binary treatments, their derived bounds coincide with the optimality result of Dorn and Guo 2023. They provide a closed-form solution and an algorithm to estimate the bounds and show empirically that it improves over line search methods (a formal complexity analysis would be nice).
Strengths: This paper makes several novel contributions to an active area of study in causal machine learning, which are listed above in the summary. They provide theoretical and experimental evidence supporting their claims. They do a good job presenting and comparing to the related work. An exceptionally well-written and organized paper given the complexity of the subject matter.
Weaknesses: ## I have one primary comment.
I think there may be a step missing in the special cases proofs of Appendix C. I appreciate how the authors have utilized SCM to define the GMSM. But, in defining the MSM, CMSM, and LMSM in terms of hidden confounders $u$, it seems that there is a step missing from how these models are originally defined. Namely, they are defined with respect to potential-outcomes / counterfactuals $Y_{t}$ and the conditional independence relation: $Y_t \perp T \mid X$. For example, it's not obvious to me how you move from $P(a \mid x, y_t)$, to $P(a \mid x, u)$. Given that you show equivalence in terms of $P(a \mid x, u)$, I think it is important to be explicit here. If this is resolved, and I admit that this could be completely trivial and I just don't see it, I would happily increase my score. If it cannot be resolved, I would suggest removing these claims and my score would remain the same.
## I have a few minor comments.
First, the last two paragraphs of the introduction can be streamlined as the contributions paragraph essentially reiterates the points of the paragraph starting on line 45. I like both styles, with slight preference for the contribution format.
There is a recent paper proposing a marginal sensitivity analysis for continuous treatments that could be added to the related works.
Line 70 "... when while ..." seems to be a typo
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: How does one show that $P(a \mid x, y_t) = P(a \mid x, u)$?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 4 excellent
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ## Response to reviewer taFV
Thank you for your positive evaluation of our paper! We took all your comments at heart and improved our paper accordingly.
### Response to “A formal complexity analysis would be nice”
* Thank you for pointing this out. In the following, we assume that all models are trained and we have a sample $(y_i)_{i=1}^k \sim \hat{\mathbb{P}}^Y(\cdot \mid \mathbf{x}, \mathbf{m}, \mathbf{a})$ (necessary for both estimators) available. Then, our estimator from Eq. (8) has a complexity of $\mathcal{O}(k)$ because it only involves summing and quantile computation. Algorithm 1 of Jesson et al. (2022) has a complexity of $\mathcal{O}(k n)$, where $n$ is the number of grid search points. For Table 1, we choose $k = n = 5000$. \
**Action:** We will add this to our appendix and thereby highlight the benefits of our work.
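The gap between the two complexities can be illustrated with a toy operation counter (illustrative stand-in only; `score` is hypothetical and this is not Jesson et al.'s actual line search): a closed-form estimator needs one weighted pass over the $k$ samples, while an $n$-point grid search re-scores all $k$ samples at every grid point:

```python
# Toy operation count: O(k) closed-form pass vs. O(k * n) grid search.
k, n = 500, 200
samples = list(range(k))
evals = [0]

def score(y):  # hypothetical per-sample term of a bound estimate
    evals[0] += 1
    return y

closed_form = sum(score(y) for y in samples) / k  # one pass: O(k)
closed_evals = evals[0]

evals[0] = 0
grid = max(sum(score(y) for y in samples) / k for _ in range(n))  # O(k * n)
grid_evals = evals[0]
```

With $k = n = 5000$ as in Table 1, the grid search performs 5000 times more per-sample evaluations than the single closed-form pass.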
### Response to “Primary comment”
* This is an excellent question. First of all, while we are the first to use Pearl’s SCM framework for MSM-type sensitivity analysis, we are **not** the first to define the MSM in terms of an unobserved confounder $U$ instead of the potential outcomes $Y_t$. Note that the SCM and potential outcome framework are logically equivalent and we decided to use SCMs because they allowed us to think in a new way about the bounding problem for sensitivity analysis (via probability mass transport, see Fig. 2). The following state-of-the-art papers all use the unobserved confounder $U$ in their definitions within the potential outcomes framework:
* [1] Dorn and Guo 2022, “Sharp sensitivity analysis for inverse propensity weighting via quantile balancing”, JASA, Equation (2)
* [2] Dorn et al. 2022, “Doubly-valid/ doubly-sharp sensitivity analysis for causal inference with unmeasured confounding”, arXiv:2112.11449, Definition 1
* [3] Oprescu et al. 2023, “B-learner: Quasi-oracle bounds on heterogeneous causal effects under hidden confounding”, ICML, Assumption 1
* [4] Bonvini and Kennedy 2022, “Sensitivity analysis for marginal structural models”, arXiv:2210.04681, Equation (5)
* We agree that the equivalence of the two MSM definitions is not immediately clear and that the papers above do not seem to provide formal results on this. In [1], the authors only write “However, as pointed out by a referee, these assumptions are equivalent”. In the following, we provide a formal result using the GMSM formulation (note that this is equivalent to the original MSM definition via log-odds ratio).
* Lemma: The following two statements are equivalent:
1) There exists a $U$ with $Y_1, Y_0 \perp A \mid X, U$ so that $s^- \leq \frac{\mathbb{P}(u \mid x, a)}{\mathbb{P}(u \mid x)} \leq s^+$
2) It holds that $s^- \leq \frac{\mathbb{P}(y_1, y_0 \mid x, a)}{\mathbb{P}(y_1, y_0 \mid x)} \leq s^+$
* Proof (sketch):
* Direction 2. $\to$ 1.: Define $U = (Y_1, Y_0)$.
* Direction 1. $\to$ 2.: We proceed via proof by contradiction. Assume there exists a pair $(y_1, y_0)$ that violates 2), say w.l.o.g. $\frac{\mathbb{P}(y_1, y_0 \mid x, a)}{\mathbb{P}(y_1, y_0 \mid x)} > s^+$. We can use the ignorability condition from 1. to write $\mathbb{P}(y_1, y_0 \mid x, a) = \int \mathbb{P}(y_1, y_0 \mid x, u, a) \mathbb{P}(u \mid x, a) du = \int \mathbb{P}(y_1, y_0 \mid x, u) \mathbb{P}(u \mid x, a) du $. Furthermore, $\mathbb{P}(y_1, y_0 \mid x) = \int \mathbb{P}(y_1, y_0 \mid x, u) \mathbb{P}(u \mid x) du$. It follows that $s^+ < \frac{\int \mathbb{P}(y_1, y_0 \mid x, u) \mathbb{P}(u \mid x, a) du}{\int \mathbb{P}(y_1, y_0 \mid x, u) \mathbb{P}(u \mid x) du} $. Hence, there exists a $u$ such that $s^+ < \frac{\mathbb{P}(y_1, y_0 \mid x, u) \mathbb{P}(u \mid x, a)}{\mathbb{P}(y_1, y_0 \mid x, u) \mathbb{P}(u \mid x)} = \frac{\mathbb{P}(u \mid x, a)}{\mathbb{P}(u \mid x)}$, which is a contradiction to the sensitivity constraint from 1.
* **Action**: We will add a section to the appendix where we discuss the two definitions and prove their equivalence with the arguments provided above.
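The equivalence in the lemma can also be checked on a small discrete example (illustrative numbers, covariates $x$ suppressed): choosing $Y_a = a + 2u$ makes the map $u \mapsto (y_1, y_0)$ one-to-one, so the density ratios in statements 1) and 2) coincide exactly:

```python
# Direction "2 -> 1" of the lemma with U = (Y_1, Y_0): since Y_a = a + 2u is
# injective in u, P(y_1, y_0 | a) = P(u | a) and P(y_1, y_0) = P(u), hence
# the ratio constraints on U and on the potential-outcome pair agree.
p_u = {0: 0.4, 1: 0.6}           # P(U = u)
p_a1_given_u = {0: 0.2, 1: 0.7}  # P(A = 1 | U = u)

p_a1 = sum(p_a1_given_u[u] * p_u[u] for u in p_u)  # P(A = 1)

ratios_u, ratios_y = [], []
for u in p_u:
    ratios_u.append(p_a1_given_u[u] / p_a1)            # P(u | a=1) / P(u)
    p_pair_given_a1 = p_a1_given_u[u] * p_u[u] / p_a1  # P(y1, y0 | a=1)
    ratios_y.append(p_pair_given_a1 / p_u[u])          # ... / P(y1, y0)

s_minus, s_plus = min(ratios_u), max(ratios_u)  # same bounds in both views
```

With these illustrative numbers, both formulations yield the same sensitivity bounds $s^- = 0.4$ and $s^+ = 1.4$.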
### Response to “Minor comments”
* Thank you for pointing out the redundancy. **Action:** We will streamline the last two paragraphs of the introduction into a larger “Contribution” paragraph, as proposed.
* We assume that you are referring to the following paper: “Partial identification of dose responses with hidden confounders”, Marmarelis et al., UAI 2023. Thanks for pointing this out! The paper was published after the NeurIPS deadline, which is why we did not include it in the related work at the time of submission. We would like to emphasize that the paper derives bounds for a different sensitivity model that is only applicable to continuous treatments and **not** to binary or time-varying treatments, and mediation or distributional effects. **Action:** We will add the paper to the related work and also add a discussion to the appendix, in which we compare the different sensitivity models for continuous treatments.
* Thanks for pointing out the typo! **Action:** We will carefully proofread our paper.
---
Rebuttal Comment 1.1:
Title: Thank you for your response. You have addressed my concerns and I would like to raise my score to an 8.
Comment: I have read your response and I appreciate the efforts you have made to address my concerns. I'm particularly impressed by the lemma you have provided, as the relationship there is something that has bothered me for some time. Trusting that the action points will be incorporated into the camera ready version, I would like to increase my score to an 8.
---
Reply to Comment 1.1.1:
Comment: Thank you for acknowledging our response and for providing swift feedback. We are happy that we were able to resolve the ambiguity regarding the sensitivity model definition. We will incorporate all action points as promised.
Thank you also for your willingness to increase your score to an 8. We saw in the system that the number did not change yet, and, for that reason, we wanted to simply follow up on whether this is still a to-do or if this is something with OpenReview. If you have any further questions or requests, please let us know. | Summary: The authors study the problem of bounding a given causal effect. To this end, they propose a generalized marginal sensitivity model (GMSM) that is applicable to multiple discrete, continuous, and time-varying treatments. They also present a new interpretation of partial identification.
Strengths: The proposed GMSM model generalizes the previous models and leads to sharp bounds for certain causal effects with certain causal graphs.
The bounds depend on observed variables and thus can be estimated.
Except for some minor ambiguities (see below), the paper is well-written.
Weaknesses: Although the proposed model generalizes the previous models, the presented theoretical results hold under specific graphical constraints, e.g., no confounder between M and the outcome, and no hidden confounders between X and $\{A,M,Y\}$.
The other weakness of the results is their generalizability to arbitrary causal graphs. Based on the proofs presented in the appendix, it is not clear how these results can be generalized by relaxing the assumptions.
There is ambiguity about the notation of U. Does $U_Y$ denote the unobserved exogenous variable for Y in the definition of SCM or is it a hidden confounder between Y and A? Is it possible to have hidden confounders among the mediators (e.g., $M_1$ and $M_2$)?
The explanation below (3) says “If $U_W$ has no effect on A, Eq. (3) holds with $s^-_W (a, x) = s^+_W (a, x) = 1$”. But it seems that (3) encodes the effect of A on $U_W$!
In (35) in the Appendix, what is U exactly? Does the setting imply that u and x are independent, i.e., $p(u|x)=p(u)$?
Technical Quality: 4 excellent
Clarity: 3 good
Questions for Authors: See above.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 3 good
Contribution: 3 good
Limitations: The authors addressed the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ## Response to reviewer 9HdS
Thank you for your positive review and your helpful comments. We improved our paper in the following ways:
### Response to “Weaknesses”
* Thank you for giving us the opportunity to clarify the assumptions in our paper.
* The assumption that no unobserved confounders exist between mediators $M$ and outcome $Y$ is necessary to provide results for **nested counterfactuals** (see point three below for a more detailed discussion). We would like to emphasize that we need this assumption only for settings with mediators (i.e., Corollary 2) and **not** for our results without mediators (Corollary 1). Furthermore, this assumption is **weaker** than assumptions usually taken in the mediation literature (see point three).
* We do **not** assume that there is no unobserved confounding between $X$ and $A, M, Y$. For example, we allow for correlation between $X$ and $U$, so that $U$ could also be an unobserved confounder between $X$ and $Y$. Technically speaking, we only assume that $\{X, U_W\}$ is a sufficient adjustment set for the $A-W$ relationship ($W \in \{M, Y\}$). This is equivalent to ignorability assumptions in the potential outcomes framework like $Y_1, Y_0 \perp A \mid X, U$ (for binary treatments), which are **standard in the literature** on unobserved confounding and imply that $U$ captures all unobserved confounders (Dorn and Guo 2022).
* **Action:** We will add a detailed discussion of our assumptions to our paper.
* We agree that our results cannot be generalized for arbitrary causal graphs in a straightforward way. However, the aim of our paper is not to provide results for arbitrary graphs, but rather for (conditional) treatment effects and mediation/ path analysis. We argue that the setting we study in our paper is already **very general compared to current state-of-the-art** work on causal sensitivity analysis. In particular, we are the first to study MSM-type sensitivity analysis for mediation and path analysis. Furthermore, our setting includes the whole literature on (conditional) average treatment effect estimation as a special case. We believe that extending our results to other causal graphs (e.g., with instrumental variables) is an interesting direction of possible future research, but out of the scope of this paper. \
**Action:** We will expand our section on future work and limitations, and discuss possible causal inference settings which may be of interest for future work on sensitivity analysis.
* Thank you for pointing out the ambiguity in our notation $U_W$. All variables $U_W$ we consider explicitly are unobserved confounders between treatment $A$ and mediator/ outcome $W$ and not exogenous noise. The latter is implicitly considered as part of the SCM definition. In Theorem 1, we assume that there exist no unobserved confounders between mediators, e.g., between $M_1$ and $M_2$.
* The main reason for this assumption is that it allows us to interpret the causal query $Q(\mathbf{x}, \overline{\mathbf{a}}, \mathcal{M})$ from Equation (1) as a path-specific effect. Path-specific effects are defined as so-called nested counterfactuals that lie in the third layer of Pearl’s causal hierarchy. Using the assumption from Theorem 1, they can be reduced to the query $Q$, which lies in layer 2 of Pearl’s hierarchy, i.e., only depends on interventions (Correa et al. 2021). Our sensitivity analysis then bridges the gap from layer 2 to layer 1 (observational data). We believe that, in principle, relaxing the assumption should be possible. For example, one could consider combining our results with a sensitivity analysis from layer 3 to layer 2, e.g., by imposing a sensitivity model on the level of confounding between mediators $M_1$ and $M_2$. While this goes beyond the scope of our paper, it seems like an interesting direction for future work.
* Note that **we rely on this assumption only in mediation settings**. A large part of our contribution is the derivation of bounds for distributional effects for different treatment types in settings without mediators (Corollary 1), where we do not impose such an assumption.
	* We would like to emphasize that our assumptions are **weaker** than most state-of-the-art work for mediation/path analysis (e.g., Shpitser and Tchetgen Tchetgen 2016). Most work assumes unconfoundedness between all variables, while we allow for unobserved confounding between treatment and mediators/outcome and only assume unconfoundedness between the mediators and outcome.
* **Action:** We expand our discussion of the assumption from Theorem 1, clarifying that we do not allow for unobserved confounding between mediators. We also expand our section on future work and clarify the definition of $U_W$.
* In Eq. (3), $U_W$ is by definition a parent of $A$ in the causal graph. Eq. (3) only limits the level of dependence (in the language of probability theory) between the random variables $U_W$ and $A$. Here, the order of the variables in $\mathbb{P}(u_W \mid x, a)$ does not reflect the causal order between $U_W$ and $A$. Note also that for the MSM we can simply rewrite the fraction from Eq. (3) as $\frac{\mathbb{P}(a \mid x, u_W)}{\mathbb{P}(a \mid x)}$ (see Appendix C), which reverses the order of the variables in the probabilities. \
**Action:** We will add a clarification to the paper.
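The rewriting of the fraction mentioned above follows directly from Bayes' rule; a small numeric check with a hypothetical binary confounder and binary treatment (illustrative numbers only):

```python
import numpy as np

# Hypothetical binary confounder U and binary treatment A, conditional on a fixed x.
p_u = np.array([0.6, 0.4])            # P(u | x)
p_a1_given_u = np.array([0.3, 0.7])   # P(a = 1 | x, u)

p_a1 = (p_a1_given_u * p_u).sum()          # P(a = 1 | x)
p_u_given_a1 = p_a1_given_u * p_u / p_a1   # Bayes' rule: P(u | x, a = 1)

# The two ways of writing the sensitivity ratio coincide.
lhs = p_u_given_a1 / p_u              # P(u | x, a) / P(u | x)
rhs = p_a1_given_u / p_a1             # P(a | x, u) / P(a | x)
assert np.allclose(lhs, rhs)
```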
* Eq. (35) is part of Appendix C, where we consider the three important sensitivity models MSM, CMSM, and LMSM. These sensitivity models are defined without mediators, which means that we only have one unobserved confounder $U = U_Y$ between treatment $A$ and outcome $Y$. We do **not** assume that U and X are independent, as this would be a very strong assumption. For example, if we are interested in the effect of smoking ($A$) on cancer risk ($Y$), $U$ may be specific gene expressions correlated with observed confounders $X$ such as medical history. \
**Action:** We will add a clarification to Appendix C. | Rebuttal 1:
Rebuttal: ## Response to all reviewers
Thank you very much for the constructive evaluation of our paper and your helpful comments! We addressed all of them in the comments below and uploaded additional empirical results as a PDF file. Our main improvements are the following:
* We provided **clarifications** regarding our assumptions. Therein, we explain that our assumptions are weaker than in previous literature.
* We discussed the **connection to related literature** (e.g., partial identification without sensitivity models) and spell out explicitly how our work is different and novel.
* We obtained **new empirical and theoretical results**, including a new experiment on semi-synthetic data.
We will incorporate all changes (labeled with **Action**) into the camera-ready version of our paper. Given these improvements, we are confident that our paper will be a valuable contribution to the causal machine learning literature and a good fit for NeurIPS 2023.
Pdf: /pdf/4de295e07fa8df541f663b9bed2dea37591f07f7.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
CAST: Cross-Attention in Space and Time for Video Action Recognition | Accept (poster) | Summary: This paper proposes a novel two-stream architecture called Cross-Attention in Space and Time (CAST) for video action recognition. The proposed architecture achieves a balanced spatio-temporal understanding of videos using only RGB input.
Strengths: 1. The paper is well-written and well-structured.
2. The implementation of Adapters in the cross-attention mechanism is impressive.
3. The paper presents sufficient experiments.
Weaknesses: In Table 2, when compared to VideoMAE, CAST achieves only a minor improvement on the SSv2 dataset. The authors are encouraged to discuss why the cross-attention mechanism appears to have a limited impact on enhancing performance for motion-dominated datasets, like SSv2. Likewise, it would be valuable to have an analysis explaining why the proposed method underperforms compared to AIM (CLIP pretrained only) on the K400 dataset.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: See weakness
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 4 excellent
Limitations: The authors have addressed the limitations
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the great questions. We address the issues raised by the reviewer below.
* Why does CAST show minor improvement on the SSV2 dataset compared to VideoMAE?
The main reason for the minor improvement is that SSV2 is an object-appearance-agnostic action dataset. SSV2 consists of actions such as “pushing something from left to right”, “pushing something from right to left”, “throwing something”, “squeezing something”, and so on. A temporal expert can show strong performance in discriminating such movements without recognizing specific objects. Given that VideoMAE already serves as a strong temporal expert, adding fine-grained spatial information from the spatial expert (CLIP) may not provide significant additional benefits for this specific dataset.
It is crucial to clarify that the primary objective of our work is not to achieve the best performance on a particular dataset. Instead, our focus is on achieving a balanced spatio-temporal understanding, which results in good performance across datasets with diverse characteristics. As evident in our results, CAST demonstrates strong performance in this respect, achieving a harmonic mean of 71.5% across EK100 verb, noun, SSV2, and K400 datasets. This demonstrates the effectiveness of CAST in achieving balanced performance across different action recognition tasks and datasets.
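The harmonic mean used above as a balance metric can be computed as follows; the accuracies below are placeholder values for illustration, not the numbers reported in the paper.

```python
from statistics import harmonic_mean

# Placeholder per-dataset top-1 accuracies (percent) -- hypothetical values
# for illustration, not the paper's reported numbers.
accs = {"EK100-verb": 72.5, "EK100-noun": 60.3, "SSV2": 70.0, "K400": 85.0}

# The harmonic mean rewards balanced performance: a weak dataset drags the
# score down more than it would under an arithmetic mean.
hm = harmonic_mean(accs.values())
print(round(hm, 1))   # 70.9
```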
* Why the proposed method underperforms compared to AIM on the K400 dataset?
We have investigated the reason for the lower performance of CAST compared to AIM on the K400 dataset. We found that the reason is mainly due to the preprocessing steps. In prior works such as VideoMAE, the videos are resized so that the shorter side is 320 pixels, and then random crops are taken from the resized video. However, in our previous implementation, we resized the videos so that the shorter side is 256 pixels. This discrepancy in preprocessing accounts for the lower performance of CAST compared to AIM on the K400 dataset. When we use the same video resizing as prior works, we achieve 85.4% accuracy on the K400 dataset, outperforming AIM-B (84.5%). | Summary: This paper presents an approach to action recognition that is based on the fusion of two streams of analysis. This involves a spatial stream and a temporal stream that interact through a novel mechanism to improve classification performance.
Strengths: 1. The paper is generally well written
2. The cross-attention mechanism is quite novel with respect to methods for 2 stream fusion.
3. Results appear to be quite good on standard benchmarks, and best when considering average performance
Weaknesses: 1. The B-CAST mechanism is quite complex and hard to unpack. With that said, I'm not sure there is a simpler way of framing this, but I would encourage the authors to consider simplifying the description (if possible). Figure 3 is fantastic in this regard.
2. The results only marginally improve the state of the art and are second best on the two main benchmarks considered, but bested by different models in each case. I wouldn't hold the paper back due to this, but it makes the contribution a bit weaker.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. Compared to other models, the proposed model appears to be better at verbs and weaker at nouns. Is it possible for the authors to comment on that trade-off?
2. Figure 6 shows a few good illustrated examples. Would the authors consider presenting more examples (e.g. as Supplementary Material)?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: There is recognition of some of the limitations of the work throughout the paper, but not a dedicated statement.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the constructive suggestions. We address the issues raised by the reviewer below.
* Straightforward unpacking of B-CAST mechanism
To facilitate a straightforward understanding of the B-CAST architecture, we present two additional figures in the global response PDF attached. Figure 1 illustrates how the model transforms and computes features in detail with tensor dimensions. After passing the previous layer output through the MHSA of each expert, we reduce the feature dimension by half with a down projection layer followed by layer normalization. Subsequently, the two experts exchange features, reshaping them according to the window shape. After the cross-attention, we upscale the feature by a factor of two using an up-projection layer. For an in-depth understanding of tensor dimensions, we direct the reviewer to Table 1 of the supplementary material.
Figure 2 demonstrates where the proposed cross-attention mechanism attends to. In Temporal-to-Spatial (T2S) cross-attention, the query corresponds to a spatial patch of the spatial expert. In T2S, the query attends to temporal patches at the same position across all frames. We term this cross-attention window shape as “time”. Conversely, in Spatial-to-Temporal (S2T) cross-attention, the query represents a patch of the temporal expert. In S2T, a temporal query attends to all spatial patches of the same frame. We term this cross-attention window shape as “space”. We empirically find that using T2S and S2T shows favorable performance compared to alternative choices in Table 1 (e) of the main paper.
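The window shapes described above can be sketched in a few lines. The following is a minimal, framework-free illustration (plain NumPy, toy sizes, random weights standing in for the learned down-/up-projections), not the actual B-CAST implementation:

```python
import numpy as np

def attention(q, k, v):
    # Plain scaled dot-product attention; softmax over the last axis.
    scores = q @ k.swapaxes(-1, -2) / np.sqrt(q.shape[-1])
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    w = w / w.sum(axis=-1, keepdims=True)
    return w @ v

T, N, D = 4, 6, 8                      # frames, patches per frame, feature dim (toy)
rng = np.random.default_rng(0)
x_spat = rng.normal(size=(T, N, D))    # spatial-expert tokens
x_temp = rng.normal(size=(T, N, D))    # temporal-expert tokens

# Down-projection to half dimension, with random weights standing in for the
# learned projection (illustration only).
Wd = rng.normal(size=(D, D // 2))
s, t = x_spat @ Wd, x_temp @ Wd

# T2S, "time" window: a spatial query at patch position n attends to the
# temporal tokens at the same position across all T frames.
q_t2s = s.transpose(1, 0, 2)           # (N, T, D/2): attended axis is time
kv_t2s = t.transpose(1, 0, 2)
t2s = attention(q_t2s, kv_t2s, kv_t2s).transpose(1, 0, 2)   # back to (T, N, D/2)

# S2T, "space" window: a temporal query attends to all N spatial tokens of
# the same frame, so the attended axis is space.
s2t = attention(t, s, s)               # (T, N, D/2)

# Up-projection back to D and residual fusion (sketch).
Wu = rng.normal(size=(D // 2, D))
x_spat_out = x_spat + t2s @ Wu
x_temp_out = x_temp + s2t @ Wu
```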
* Performance gap
We have investigated the reason for the relatively marginal performance improvement of CAST compared to state-of-the-art methods on the K400 dataset. We found that the reason is mainly due to differences in the preprocessing steps. In prior works such as VideoMAE, the videos are resized so that the shorter side is 320 pixels, and then random crops are taken from the resized video. However, in our previous implementation, we resized the videos so that the shorter side is 256 pixels. This discrepancy in preprocessing accounts for the lower performance of CAST compared to AIM on the K400 dataset. When we use the same video resizing as prior works, we achieve 85.4% accuracy on the K400 dataset and 77.9% harmonic mean of SSV2 and K400 accuracies surpassing the previous state-of-the-art (77.2%). On the EK100, the previous state-of-the-art method OMNIVORE trained with an external video dataset, K400, shows 49.9% action accuracy. When we train CAST with an external video dataset (Wang et al., 2022 [34] of supplementary material), we achieve 50.2% action accuracy as shown in Table 8 of supplementary material.
* Verb-Noun performance trade-off on EK100
In comparison to OMNIVORE (with 69.5% accuracy for verbs and 61.7% for nouns) and MTV-HR (with 68.0% accuracy for verbs and 63.1% for nouns), CAST exhibits a larger gap between verb and noun prediction accuracies, achieving 72.7% accuracy for verbs and 60.6% for nouns. This can be attributed to the fact that the noun prediction task in the EK100 dataset entails a highly detailed 300-way classification of kitchen objects, representing a fine-grained classification challenge. Both MTV-HR and OMNIVORE are pre-trained on image datasets as well as video datasets (MTV-HR pre-trained on the ImageNet-21K and K400, and OMNIVORE pre-trained on the ImageNet-21K, ImageNet-1K, SUN-RGBD, and K400), which contributes to their superior performance in fine-grained noun classification. In contrast, CAST utilizes a spatial expert that is based on CLIP, an image-text contrastively pre-trained model. It is plausible that CLIP, despite its pre-training, may exhibit relative weaknesses in fine-grained noun classification compared to MTV-HR and OMNIVORE. Importantly, MTV-HR employs high-resolution image pre-training. Notably, similar to CAST, other CLIP-based methods like ST-Adapter and AIM also demonstrate relatively wider gaps between verb and noun accuracies. ST-Adapter yields 67.6% accuracy for verbs and 55.0% for nouns, while AIM achieves 64.8% accuracy for verbs and 55.5% for nouns. CAST surpasses these approaches through the effective bi-directional cross-attention-based information exchange between spatial and temporal experts, ultimately resulting in stronger performance across both verb and noun prediction tasks. We believe that this gap can be reduced by employing a spatial expert model with enhanced fine-grained classification capabilities, for instance, one pre-trained with higher-resolution images as seen in MTV-HR.
* More qualitative examples
We show more qualitative examples in Fig. 9 of supplementary material.
* Limitations
We have dedicated limitation and broader impacts sections in the supplementary material. | Summary: This paper presents two-stream vision transformers, dubbed CAST, for balanced spatiotemporal video representation learning. Given the two experts, CLIP [36] and VideoMAE [46] for spatial and temporal expert, the proposed B-CAST module allows the exchange of complementary information across the separte experts via MHCA. Such information exchange appears to be crucial for achieving balanced performances on appearance-centric (K400, Noun prediction in EK) and motion-centric benchmarks (SSv2, Verb prediction in EK).
Strengths: 1. The paper deliberately engineers the integration of two separate research streams, adapters [34, 60] and representation fusion [12, 33, 59], into one framework for balanced spatiotemporal video representation learning.
2. The proposed method achieves strong performances on both appearance- and motion-centric benchmarks.
3. Extensive investigation with several instantiations for CAST conducted in the ablation studies.
Weaknesses: 1. In Tab. 1b, why are the three baseline models fully fine-tuned without adapters? Given that a partially tuned model with adapters is reported to be more effective than a fully fine-tuned model [34, 60], the comparison seems unfair. Reevaluating these models with adapters would make for a more fair comparison.
2. Concerning the first point, are the model "independent experts" in Tab. 1a partially tuned with adapters or fully fine-tuned as in Tab. 1b? If it's the latter, for a fair comparison, CAST should be compared with an ensemble of two experts partially trained with adapters rather than with fully fine-tuned ones.
3. For a more in-depth understanding, I suggest including the number of parameters, FLOPs, and, if possible, inference throughput in Tab. 1, particularly in Tabs. 1a, 1b, and 1d.
4. The proposed method appears to increase computational cost and training & inference speed. Please provide a comparison of FLOPs, training & inference throughput with the baseline experts [36, 46], their ensemble (“independent experts” in Tab.1a), and other adapter-based methods [34, 60]. This would allow readers to comprehensively evaluate the applicability of the proposed method.
5. In Tab.1e, there is only a small performance gap between CAST (4th row) and the model in the 3rd row, i.e., (T2S, S2T) = (space, time). Does this imply that, under conditions where attention is optimized (L278), the important factor is layer-wise mixing of the features from the two experts, rather than the specific method of mixing? For your reference, I consider the "Lateral" model in Tab.1b, which linearly fuses the two streams, to be an unfair comparison to the models in Tab.1e due to its lower computation. It would be better to elaborate on the discussion, including results of (T2S, S2T) = (space, time), (time, time), or other variants.
6. Table 2 should include information on the pretraining dataset, FLOPs, # trainable parameters, # frames, and, if possible, inference speed for a comprehensive comparison.
Technical Quality: 1 poor
Clarity: 2 fair
Questions for Authors: 1. Is bi-directional information fusion necessary? What would be the result if we drop one of T2S MHCA or S2T MHCA?
2. Has any analysis been conducted on verb prediction results similar to the approach taken in Fig. 5?
3. (Minor) There is a discrepancy between the captions in Figure 4 and the actual subfigures. As a result, the text in lines 188-195 and 204-212 should be adjusted to refer to the correct subfigures.
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 1 poor
Presentation: 2 fair
Contribution: 2 fair
Limitations: The authors have provided a comprehensive discussion on the potential limitations and social impact of their work in Supp. 8 and 9. If the proposed model is found to require more computational cost or slows the processing speed down due to two-stream architecture, this inefficiency should also be considered as a limitation, which would pave the way for future improvements.
[Justification of the rating]
I acknowledge that this paper deliberately integrates two separate methods, adapter [34, 60] and feature fusion [12, 33, 59], into one module for balance spatiotemporal video representation learning. However, I’m concerned with the fairness of the experiment (refer to W1, W2, W3, and W6) and efficiency (W4). Consequently, my preliminary rating leans towards “borderline reject,” but I remain open to changing the score if the authors provide adequate discussions during the rebuttal period.
[Post-rebuttal justification]
Most of my concerns are well addressed by the rebuttal. I've gone through the rebuttal and found the provided experimental results to be reasonable and solid. I strongly recommend adding these results and the related discussions to the final manuscript. Consequently, I raise my rating to "weak accept."
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Fair comparison**
For a fair comparison with baselines, we augment Table 1 (a), (b), and (d) of the main paper with total parameters, GFLOPs/View, and throughput.
***Fusion baselines***
For a fair comparison with baseline information exchange methods, we add adapters to add, concat, and lateral fusion baselines. We fine-tune the adapters only. We show the results in Table below.
|Method|Late|Layer-wise|Total Param(M)|Tuneable Param(M)|GFLOPs/View|Throughput(V/s)|EK100 Verb|EK100 Noun|EK100 Act|
|-|-|-|-|-|-|-|-|-|-|
|Add|✅||187|15|343|34|68.9|56.6|44.2|
|Concat|✅||188|16|343|34|69.2|56.4|44.5|
|Lateral||✅|201|29|366|32|68.9|49.1|39.0|
|CAST||✅|217|45|391|28|72.5|60.3|48.7|
CAST shows a significant improvement over all the baselines with adapters, with a gain of more than 4 points.
***Indep. experts baselines***
For a comprehensive comparison, we present the results of independent experts with and without adapters in the table below.
|Method|Total Param(M)|Tuneable Param(M)|GFLOPs/View|Throughput (V/s)|EK100 Verb|EK100 Noun|EK100 Act|
|-|-|-|-|-|-|-|-|
|Indep. experts w/o adapter|172|172|321|38|70.7|50.1|40.0|
|Indep. experts w/ adapter|187|15|343|34|68.1|54.2|41.7|
|Ensemble of experts w/ adapters|188|15|343|33|68.2|55.3|42.9|
|CAST|217|45|391|28|72.5|60.3|48.7|
CAST surpasses all the baselines compared. Most importantly, CAST achieves an impressive 5.8-point boost over the ensemble experts with adapters baseline.
We observe that CAST shows a quite good trade-off between the performance and computation, compared to the baselines compared.
***Projection ratio***
We choose the projection ratio of 1/2 as it yields the best trade-off between the computation and action accuracy.
|Ratio|Total Param(M)|Tuneable Param(M)|GFLOPs/View|Throughput(V/s)|EK100 Verb|EK100 Noun|EK100 Act|
|-|-|-|-|-|-|-|-|
|1/8|191|19|351|31|70.7|59.9|47.4|
|1/4|198|26|361|30|71.3|59.8|47.4|
|1/2|217|45|391|28|72.5|60.3|48.7|
|1|275|103|483|24|72.1|59.8|48.6|
**Comprehensive comparison with existing methods**
We compare the computational cost of CAST with independent expert models and other adapter-based models, presenting the results in the table below. Among these, CLIP-B/16 with adapter represents a partially tuned spatial expert, while VideoMAE-B/16 with adapter signifies the temporal expert.
***Training:***
We report the single-step time (including forward pass, backward pass, and parameter update) using the EK100 dataset on a single GPU (RTX 3090) with 24GB of memory. To ensure a fair comparison, we exclude data loading and distributed communication from the step time measurement. We maintain an equal batch size of 6/GPU across all models.
***Inference:***
We measure latency with a batch size of 1, focusing solely on the forward pass time. We also measure throughput with a batch size of 32.
|Method|Total Param(M)|Tune Param(M)|Frames|Step Time (S)|VRAM (GB)|GFLOPs/View|Latency (S)|Throughput(V/s)|EK100 Verb|EK100 Noun|EK100 Act.|
|-|-|-|-|-|-|-|-|-|-|-|-|
| CLIP-B w/ adapter |93|7|8|0.14|6.6|152|13.7|97|54.8|54.8|35.0|
| VideoMAE-B w/ adapter|94|7|16|0.21|13.0|192|20.1|57|68.4|48.1|38.2|
| ST-Adapter-B|93|7|16|0.31|12.6|-|22.9|49|67.6|55.0|-|
| AIM-B|97|11|16|0.35|12.7|405|31.5|37|64.8|55.0|41.3|
| CAST-B|217|45|16|0.44|21.0|391|48.2|28|72.5|60.3|48.7|
**Attention window shape**
We show the exhaustive search results in the table below.
| Window shape T2S | Window shape S2T | Tune param(M) | Verb| Noun | Act. |
|-|-|-|-|-|-|
| time | space-time | 57 | 71.5 | 60.9 | **48.7** |
| time | space | 45 | 72.5 | 60.3 | **48.7** |
| time | time | 43 | 72.1 | 60.6 | 48.6 |
| space | space-time | 59 | 71.5 | 60.3 | 48.4 |
| space | space | 47 | 72.3 | 60.2 | 48.5 |
| space | time | 45 | 72.0 | 60.2 | 48.5 |
| space-time | space-time | 72 | 71.0 | 59.3 | 47.2 |
| space-time | space | 59 | 71.9 | 60.3 | 48.4 |
| space-time | time | 57 | 71.3 | 60.3 | 48.2 |
The experimental results show that the performance is not quite sensitive to the window shape. We choose the time attention for T2S and space attention for S2T, as this configuration offers a good trade-off between the number of learnable parameters and action accuracy.
**Augmenting Table 2.**
We will add the information on the pretraining dataset, FLOPs, the number of trainable parameters, the number of frames, and inference speed in Table 2 of the revised paper.
**Bi-directional cross-attention**
To validate the effectiveness of bi-directional cross-attention, we compare it with uni-directional cross-attention by individually omitting S2T and T2S in the table below.
| Method | Total Param(M) | Tune Param(M) | Throughput (V/s) | Verb | Noun | Act. |
|-|-|-|-|-|-|-|
| Indep. experts w/ adapter | 187 | 15 | 34 | 68.1 | 54.2 | 41.7 |
| S2T only | 210 | 38 | 30 | 71.2 | 55.0 | 43.7 |
| T2S only | 208 | 36 | 30 | 68.7 | 60.5 | 46.7 |
| CAST | 217 | 45 | 28 | 72.5 | 60.3 | 48.7 |
CAST's bi-directional cross-attention outperforms uni-directional cross-attention. CAST achieves a 5-point enhancement over the S2T-only baseline and a 2-point improvement over the T2S-only baseline.
**Category-wise analysis of verb prediction performance**
We have conducted an analysis of the verb prediction performance across different verb categories (not super classes), and we present the results in Figures 5 and 6, as well as in Section 6 of the supplementary material. There is no super-class definition for verbs in the EK100 dataset.
**Missing caption in Figure 4.**
We will include the missing caption for Figure 4 (c) and make the necessary revisions to the text in lines 188-195 and 204-212 accordingly in the final version.
**Limitations**
In the final version, we will acknowledge the efficiency reduction as a potential limitation.
---
Rebuttal Comment 1.1:
Title: Response to the authors
Comment: I appreciate the authors for providing the rebuttal. I've gone through the rebuttal and found that the provided experimental results appear to be reasonable and solid. I highly recommend revising the paper to incorporate these results and the related discussions from the rebuttal. Consequently, I raise my rating to "weak accept." | Summary: The manuscript proposes an approach to action recognition. The key idea behind the proposed approach is the usage of a two-stream network where one network is specialized to encode the spatial details while the other encodes the temporal details. The approach also presents additional cross connections to effectively integrate the information from one stream to another. Experiments on three benchmark datasets show that the proposed approach is capable of outperforming existing approaches on datasets requiring spatial reasoning, temporal reasoning, and spatio-temporal reasoning.
Strengths: The paper is written clearly, explaining the various contributions and the underlying motivations. The majority of existing approaches consider action recognition as a generic problem. However, the analysis presented in the manuscript showing the performance variations of existing approaches on datasets requiring spatial reasoning, temporal reasoning and spatio-temporal reasoning is quite intuitive and useful to the research community. The various design choices are validated with an extensive set of ablation studies. This includes the impact of the various backbones, pretraining strategies and the contributions. The improved performance on the different datasets compared to existing approaches showcases the effectiveness of the proposed approach. Moreover, this work could inspire the action recognition research community to move away from the paradigm of one model for all.
Weaknesses: Two stream networks for action recognition is not a novel concept. For instance there are works that use RGB input in one network and optical flow in the other [1,2], two RGB input streams sampled at different frame rates [3] or spatial resolution [4], etc. However, there is no discussion regarding these approaches in the manuscript. The work of [5] is also similar to the proposed approach as it uses two different encoders for extracting spatial and temporal features. Even though the proposed approach is not exactly derivative of the few above-mentioned approaches, it is highly recommended to include a discussion comparing these approaches.
In order to predict the final action class, the adapter output of the spatial expert's CLS token and the adapter output of the temporal expert's global-average-pooled features are added, followed by a classification layer. This addition operation across the features may diminish the discriminative information present in the individual streams. Instead, one may predict the action logits separately, followed by an average fusion of the logits. Even though this operation is the same as the one presented in the paper, the compute graph is different and hence so is the gradient flow.
Even though the proposed approach results in improved performance compared to existing approaches, the comparison sometimes seems unfair. For example, the spatial expert of the best CAST model is initialized with CLIP embedding while the temporal expert is initialized with VideoMAE pretrained on SSV2. The same goes for other datasets as well. Could the performance improvement be due to this additional data? While acknowledging that this is the main concept of the proposed approach, the compute requirements needed for this additional pretraining also need to be taken into account. Even though the computational complexity in terms of FLOPs is reported in the supplementary material, run time latencies could tell a different story. The reviewer recommends the authors to compare the run time latencies of the proposed approach with existing approaches. The number of trainable parameters reported in the supplementary material is also not indicative of how heavy the model is. It is recommended to report the total number of parameters present in the final model. This will be helpful to researchers while selecting a model by taking into consideration the compute/size vs accuracy tradeoff.
[1] Simonyan, Karen, and Andrew Zisserman. "Two-stream convolutional networks for action recognition in videos." Advances in neural information processing systems 27 (2014).
[2] Feichtenhofer, Christoph, Axel Pinz, and Andrew Zisserman. "Convolutional two-stream network fusion for video action recognition." Proceedings of the IEEE conference on computer vision and pattern recognition. 2016.
[3] Feichtenhofer, Christoph, et al. "Slowfast networks for video recognition." Proceedings of the IEEE/CVF international conference on computer vision. 2019.
[4] Fan, Quanfu, et al. "More is less: Learning efficient video representations by big-little network and depthwise temporal aggregation." Advances in Neural Information Processing Systems 32 (2019).
[5] Jiang, Bo, et al. "Two-pathway transformer network for video action recognition." 2021 IEEE International Conference on Image Processing (ICIP). IEEE, 2021.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Please see the weaknesses section for detailed queries.
What is the CLS(.) operation in line 126?
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: The manuscript adequately addressed the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Related work**
CAST is similar to two-stream networks [1,2] as it learns two streams for spatial and temporal context. Unlike these approaches involving optical flow estimation, CAST uses only RGB streams. Similar to CAST, SlowFast networks [3] and bLVNet-TAM [4] also use only RGB streams. SlowFast employs slow and fast streams for spatial and temporal information but differs from CAST's transformer-based architecture.
CAST and bLVNet-TAM [4] differ in key respects. CAST aims for balanced spatio-temporal understanding, while bLVNet-TAM prioritizes computational efficiency, using different stream resolutions. CAST's unique design features bi-directional cross-attention, while bLVNet-TAM relies on temporal shifting.
Jiang et al. [5] propose uni-directional cross-attention between RGB and edge encoders. CAST's bi-directional cross-attention in a bottleneck architecture achieves effective fusion. This CAST architecture outperforms the discussed methods [3-5], as shown in the tables below.
| Method | Backbone | SSV2 | K400 |
|-|-|-|-|
| SlowFast | ResNet-101 | - | 79.8 |
| bLVNet-TAM | bLResNet-50 | 65.2 | 73.5 |
| CAST | CAST-B | 71.6 | 85.4 |
We will add the missing references [4,5] as suggested; we appreciate the reviewer's insightful feedback, which enhances our analysis and comparisons with related works [1-3].
**Classification head architecture**
We have adopted the recommended head architecture, wherein we independently predict action logits for each expert and then average the logits, as depicted in Fig. 3 of the global response PDF. The results using this architecture are presented in the following table. For K400 experiments, we utilize a VideoMAE pre-trained on the hybrid dataset [34] as the temporal expert for both compared methods.
| Method | SSV2 | K400 |
|-|-|-|
| add token | 71.6 | 85.6 |
| avg logit | 71.8 | 85.4 |
The suggested head architecture yields a minor 0.2-point improvement on the SSV2 dataset and a 0.2-point decline on K400 compared to the architecture used in the main paper. These results imply that discriminative information might already be learned through cross-attention in each layer.
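The two head variants compared above can be sketched as follows (a hypothetical NumPy illustration, not the actual CAST code; feature and class dimensions are made up). With a single shared linear head the two variants differ only by a constant factor, but once each stream gets its own classifier, the compute graphs and hence the gradient flows diverge, as the reviewer noted:

```python
import numpy as np

rng = np.random.default_rng(0)
D, C = 8, 4                      # feature dim, number of classes (hypothetical)
spat = rng.normal(size=D)        # adapter output of the spatial expert's CLS token
temp = rng.normal(size=D)        # adapter output of the temporal expert's pooled features
W = rng.normal(size=(C, D))      # shared classification layer (bias omitted)

# Variant 1 ("add token"): fuse the features, then classify once.
logits_add = W @ (spat + temp)

# Variant 2 ("avg logit"): classify each stream separately, then average the logits.
logits_avg = (W @ spat + W @ temp) / 2

# With a shared linear head the variants agree up to a factor of 2;
# with separate per-stream heads (as recommended) the gradients differ.
assert np.allclose(logits_add, 2 * logits_avg)
```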
**Fair comparison: data-efficiency**
Compared to AIM and ST-Adapter, CAST does not require extra training data. In our main setup, similar to AIM and ST-Adapter, we use the pre-trained CLIP model as the spatial expert. However, we utilize VideoMAE as the temporal expert, training it from scratch on the target dataset. For example, on EK100, we first self-train VideoMAE and then fine-tune CAST end-to-end. This means CAST only needs additional VideoMAE pre-training time on the target dataset, without any other extra requirements compared to AIM and ST-Adapter.
In contrast to non-adapter-based methods like VideoMAE, CAST indeed demands more pre-training time and additional training data. For a fair comparison, we conducted an additional experiment, shown in Table below.
| Method | Spatial Expert | Temporal Expert | EK100 Verb | EK100 Noun | EK100 Act. |
|-|-|-|-|-|-|
| VideoMAE | - | VideoMAE w/ EK100 | 70.5 | 51.4 | 41.7 |
| AIM-B | CLIP w/ WIT400M | - | 64.8 | 55.5 | 41.3 |
| ST-Adapter-B | CLIP w/ WIT400M | - | 67.6 | 55.0 | - |
| CAST-B | ViT w/ IN-1K | VideoMAE w/ EK100 | 70.9 | 56.8 | 45.5 |
| CAST-B | CLIP w/ WIT400M | VideoMAE w/ EK100 | 72.5 | 60.3 | 48.7 |
In this experiment, we used a CAST variant with a spatial expert pre-trained on ImageNet-1K and a temporal expert pre-trained and fine-tuned on EK100. This variant achieved 45.5% action accuracy on EK100, surpassing AIM (41.3%) and VideoMAE (41.7%). Furthermore, this CAST variant also outperformed ST-Adapter, with accuracy values of 70.9% versus 67.6% for verb predictions and 56.8% versus 55.0% for noun predictions. Importantly, this variant required less pre-training data compared to AIM and ST-Adapter, making it more data-efficient. Although pre-training a ViT on ImageNet-1K took slightly more data and time than VideoMAE (EK100 consists of 11.5M frames while ImageNet-1K consists of 1M frames only, which corresponds to less than 10% of the EK100 dataset), the overall performance gain by the CAST variant justified this additional effort. These results highlight CAST's versatility and effectiveness, leveraging diverse pre-trained weights for competitive performance.
**Fair comparison: computation efficiency**
For a fair comparison with existing methods, we show the number of total parameters, inference-time throughput, and latency in the Table below.
|Method|Spatial|Temporal|Total Parameters(M)|Throughput(V/s)|Latency(ms)|SSV2|SSV2 GFLOPs|K400|K400 GFLOPs|EK100 Verb|EK100 Noun|EK100 Act.|EK100 GFLOPs|
|-|-|-|-|-|-|-|-|-|-|-|-|-|-|
|AIM-B|CLIP w/ WIT400M|-|97|37|31.5|68.1|1238|84.5|1214|64.8|55.5|41.3|2430|
|AIM-L|CLIP w/ WIT400M|-|354|9|122.5|69.4|5754|87.3|5604|-|-|-|-|
|ST-Adapter-B|CLIP w/ WIT400M|-|93|49|22.9|69.3|977|82.5|911|67.6|55.0|-|-|
|ST-Adapter-L|CLIP w/ WIT400M|-|-|-|-|71.9|4124|86.9|4124|-|-|-|-|
|CAST-B|CLIP w/ WIT400M|VideoMAE w/ target data|217|28|48.2|71.6|2346|85.4|5865|72.5|60.3|48.7|2346|
Compared to AIM and ST-Adapter, which employ a single expert model, our proposed CAST does not require additional pre-training data. The efficiency of the temporal expert, VideoMAE, is a key factor; VideoMAE's data-efficient nature is well-documented in its paper. Consequently, when compared to AIM and ST-Adapter, which rely on a sole expert, CAST surpasses AIM-L by +2.3% in performance. Additionally, CAST uses 137 million fewer parameters and requires ~3,400 fewer GFLOPs than AIM-L on SSV2. Similarly, compared to ST-Adapter-L, CAST achieves comparable performance with only a 0.3-point difference while requiring ~1,800 fewer GFLOPs on SSV2.
**CLS operation**
The CLS operation refers to the process of extracting the CLS token.
---
Rebuttal Comment 1.1:
Title: Response to rebuttal
Comment: Most of the concerns raised in the initial review stage are addressed in the rebuttal. Hence I keep my original recommendation of weak accept. | Rebuttal 1:
Rebuttal: We express our gratitude to the reviewers for their valuable and constructive comments. We have taken careful consideration of the points raised by each reviewer. We address these concerns comprehensively. Additionally, we have included a supplementary PDF containing figures that further support our responses and explanations for each reviewer's comments.
Pdf: /pdf/b57634542135344bcb82a727c7f274f0886cb4ec.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: This paper proposes a method, namely CAST, for video action recognition based on adapting from large-scale pre-trained models (e.g., CLIP and VideoMAE). The main motivation is to balance and exchange information between spatial and temporal information of two different experts: spatial and temporal. The proposed CAST is an adapter architecture, which adopts CLIP as a spatial expert and VideoMAE as a temporal expert and introduces the cross attention to encourage interaction between these two streams. Experiments are conducted on Epic-KITCHEN-100, Kinetics-400, and Something-Something-v2 with good results compared with current methods. Written presentation is good and mostly easy to follow.
Strengths: * The motivation of balancing between spatial and temporal information of video action recognition makes sense and the proposed architecture of CAST technically sounds.
* Experimental results are strong: (i) good results compared with existing methods; (ii) various ablation to justify the model design choices.
Weaknesses: * Although good improvements are shown, the downside may be: there may be an unfair comparison with some other adapter methods, e.g., AIM, which builds on only one expert model. This means CAST needs more data / pre-training time (of CLIP and VideoMAE);
* There is no FLOPs (or runtime) comparison in table 2. As CAST employs a 2-stream architecture, there will be more computation compared with previous methods.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: * Some minor comments / typos:
- line 104: "p pixels" -> p x p pixels for more consistent with that in line 107?
- line 106: "BT x N x D" -> should be "B x TN x D" for more consistent with that in line 108?
- Figure 4 caption has no explanation for (c).
- line 204: Figure 4 (c) instead?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The reviewer does not foresee any potential negative societal impact of this work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the great questions. We address the issues raised by the reviewer below.
**Unfair comparison with adapter-methods**
For a fair comparison, we conducted an additional experiment, as shown in Table 4 of the supplementary material. Below, we have included the table for your convenience.
| Method | Spatial Expert | Temporal Expert | EK100 Verb | EK100 Noun | EK100 Action |GFLOPs/View|
|---------------|------------------|-------------------|----------|----------|----------|-|
| AIM-B | CLIP w/ WIT400M | - | 64.8 | 55.5 | 41.3 |405|
| ST-Adapter-B | CLIP w/ WIT400M | - | 67.6 | 55.0 | - |-|
| CAST-B | ViT w/ IN-1K | VideoMAE w/ EK100 | 70.9 | 56.8 | 45.5 |391|
| CAST-B | CLIP w/ WIT400M | VideoMAE w/ EK100 | 72.5 | 60.3 | 48.7 |391|
In this experiment, we use a variant of CAST with a spatial expert pre-trained on the ImageNet-1K and a temporal expert pre-trained and fine-tuned on the EK100. This variant achieved an action accuracy of 45.5% on EK100, surpassing both AIM (41.3%) and VideoMAE (41.7%). Furthermore, this CAST variant also outperforms ST-Adapter, with accuracy values of 70.9% versus 67.6% for verb predictions and 56.8% versus 55.0% for noun predictions. Importantly, this variant requires less pre-training data compared to AIM and ST-Adapter, making it more efficient in terms of data requirements. It is worth noting that while pre-training a ViT on ImageNet-1K requires slightly more data (EK100 consists of 11.5M frames while ImageNet-1K consists of 1M frames only, which corresponds to less than 10% of the EK100 dataset) and time compared to VideoMAE, the overall performance gain achieved by the CAST variant justifies this additional effort. These results demonstrate the versatility and effectiveness of CAST, as it can leverage pre-trained weights from different sources and still achieve competitive performance.
**FLOPs comparison**
We show FLOPs, the number of frames per clip, the number of test views, and learnable parameters in Tables 8, 9, and 10 of the supplementary material. We found an error in the GFLOPs of CAST reported in the supplementary material and show the corrected GFLOPs in the following table. The GFLOPs of the other methods compared in the supplementary material are correct.
|Dataset|View|GFLOPs/View|GFLOPs|
|-|-|-|-|
|EK100|2x3|391|2346|
|SSV2|2x3|391|2346|
|SSV2|5x3|391|5865|
CAST shows favorable balanced spatio-temporal understanding performance on EK100, SSV2, and K400, even with lightweight configurations.
**Tensor shape in lines 106 and 108**
The notation in line 106 and line 108 is intentionally different: line 106 is "BT x N x D" because it denotes images passing through the spatial path, while line 108 is "B x TN x D" because it denotes a video passing through the temporal path. The difference arises because the spatial path treats each frame as an independent image. We illustrate the cross-attention process across tensors of different dimensions in Supplementary Table 1.
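The two reshapings described above can be sketched as follows (a minimal NumPy illustration with hypothetical ViT-like dimensions, not the actual CAST code):

```python
import numpy as np

B, T, N, D = 2, 8, 196, 768       # batch, frames, patches per frame, embed dim (hypothetical)
video = np.zeros((B, T, N, D))    # patch embeddings of a batch of videos

# Spatial path: each frame is an independent image -> shape (B*T) x N x D
spatial_tokens = video.reshape(B * T, N, D)

# Temporal path: all patches of one clip form a single sequence -> shape B x (T*N) x D
temporal_tokens = video.reshape(B, T * N, D)

assert spatial_tokens.shape == (16, 196, 768)
assert temporal_tokens.shape == (2, 1568, 768)
```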
**Typo and missing caption**
Thank you for your suggestions. We will fix the typo and add the missing caption in the final version.
---
Rebuttal Comment 1.1:
Title: Thank you for the rebuttal
Comment: The rebuttal addressed most of my concerns, I have upgraded my rating to "weak accept". Thank the author(s) for the additional experiments & clarification. | null | null | null | null | null | null |
Geodesic Multi-Modal Mixup for Robust Fine-Tuning | Accept (poster) | Summary: The paper proposes Geodesic Multi-Modal Mixup, which mixes heterogeneous modality embeddings on the hypersphere to generate harder negative samples for the contrastive loss of multi-modal models. Such training scheme improves modality alignment and uniformity. The method was evaluated on various tasks including retrieval, calibration, few-shot classification, and embedding arithmetic.
Strengths: - The multi-modal mixup approach is proposed opportunely.
- The method is comprehensive and straightforward.
- The idea of mixing different modalities is novel and interesting.
- Extensive experiments are impressive with significant gains in several settings.
Weaknesses: 1. I believe the key hyperparameter of the proposed method is lambda, as this should control the "hardness" of the mixed samples. Please provide an analyses (at least a sensitivity test) on this component.
2. In some cases, the authors only report m^2-Mix, while others report only m^3-Mix. It would be great if the authors could provide full tables with both mixup methods, and discuss reasons if there are some differences in performance between the two.
3. Minor presentation issues:
- Some figures have too small a font size; e.g. Figure 1,6.
- Eq. (5) and L202-203 are confusing at first glance. Please use separate lines or a comma in between.
- Figure 6 right's title should be "GMC w/ M^2-Mix"?
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: Overall, the paper was very well-written with plenty of applications. The mentioned weaknesses are relatively minor, and no critical issue was found on my side.
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 4 excellent
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your constructive comments and suggestions, and they are exceedingly helpful for us to improve our paper. We are grateful that you find strength in our idea, actual method, and experiments!
> **Comment1)** I believe the key hyperparameter of the proposed method is lambda, as this should control the "hardness" of the mixed samples. Please provide analyses (at least a sensitivity test) on this component.
To address your suggestion, we present sensitivity analyses (Fig. 1 and Fig. 2 in the global response) on parameters of Beta distribution that contribute to the mixing rate between two embeddings. From the experiments, we would like to remark follows:
* $m^2$-mix generally shows stronger performance when the parameter $\alpha$ lies between 0.1 and 0.5 (a U-shaped Beta distribution) rather than at larger values (uniform or reversed U-shaped). This is consistent with the interval shown to work well in the original Mixup [Zhang et al. 2018] paper, but differs from the value selected by Manifold Mixup ($\alpha$=2.0) [Verma et al. 2019], and implies that a many-to-small mixing strategy is better than a half-to-half strategy in our geodesic multi-modal mixup.
* The linear scheduling variants of the Beta parameters (0.1 -> 1.0, 0.1 -> 2.0, and their opposite-direction counterparts) achieve promising results in some cases, e.g., 1.0 -> 0.1 and 2.0 -> 0.1 on Stanford Cars. For scheduling variants, it is important to ensure that the shape of the Beta distribution approaches a U-shape at the endpoints of training.
* While scheduling-based variants have shown promising results in some cases, they are not significantly better than the fixed-parameter strategy, so for the sake of simplicity, we maintain the fixed-parameter strategy for the multi-modal mixup.
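To make the mixing-rate discussion concrete, a minimal sketch of sampling $\lambda$ from a U-shaped Beta distribution and mixing two modality embeddings along the geodesic of the unit hypersphere (an illustrative spherical interpolation in NumPy; the embedding dimension and $\alpha$ value are hypothetical, and this is not the authors' exact implementation):

```python
import numpy as np

def geodesic_mix(u, v, lam):
    """Spherical interpolation between unit vectors u and v; lam weights u."""
    theta = np.arccos(np.clip(u @ v, -1.0, 1.0))
    return (np.sin(lam * theta) * u + np.sin((1 - lam) * theta) * v) / np.sin(theta)

rng = np.random.default_rng(0)
img = rng.normal(size=512); img /= np.linalg.norm(img)   # image embedding (hypothetical)
txt = rng.normal(size=512); txt /= np.linalg.norm(txt)   # text embedding (hypothetical)

# alpha in [0.1, 0.5] gives a U-shaped Beta, so most mixes stay near one endpoint.
lam = rng.beta(0.3, 0.3)
mixed = geodesic_mix(img, txt, lam)

# Unlike linear mixing, the geodesic mix remains on the unit hypersphere.
assert np.isclose(np.linalg.norm(mixed), 1.0)
```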
> **Comment2)** In some cases, the authors only report $m^2$-Mix, while others report only $m^3$-Mix. It would be great if the authors could provide full tables with both mixup methods, and discuss reasons if there are some differences in performance between the two.
While we conducted the experiments (in the global response) only with $m^2$-Mix for this rebuttal due to limited time and resources, we agree with your suggestion and have planned to put both $m^2$-Mix and $m^3$-Mix in all tables consistently on the final version of the paper and provide detailed discussions on differences in performance.
> **Comment3)** Minor presentation issues
Thank you for taking the time to review the details as well as primary messages in our paper, we will reflect on all your remarks to revise our paper.
---
[Zhang et al. 2018] Zhang, Hongyi, et al. "mixup: Beyond Empirical Risk Minimization." International Conference on Learning Representations. 2018.
[Verma et al. 2019] Verma, Vikas, et al. "Manifold mixup: Better representations by interpolating hidden states." International conference on machine learning. PMLR, 2019.
---
Rebuttal Comment 1.1:
Title: Rebuttal Acknowledgement
Comment: Thank you for your detailed responses. Most of my questions were answered. I'll keep my score. | Summary: The paper proposes three mix-up inspired regularizers for finetuning CLIP as a way to mitigate the visual-text feature gap in the CLIP feature space. The paper proposes some theory to backup why alignment of text feature space and image feature space might be a good idea. Results in cross-modal retrieval and zero-shot classification justify the method.
Strengths: The results seem reasonable at first glance, although they are significantly below state-of-the-art.
The method has some novelty.
Weaknesses: (1) The modality gap is a well-known phenomenon [Liang22] [CyCLIP]. I think the authors misrepresent their contributions by implying that they discovered this phenomenon, by repeatedly stating in the introduction that "we found .." that the image and text embeddings occupy separate subspaces in the hypersphere (without citing the prior work). This phenomenon was studied in [Liang22] and [CyCLIP].
- Additionally, the findings of [Liang22] are somewhat misrepresented. In Table 1 of [Liang22], those authors find that regularizing the modality gap can increase or decrease the zero-shot performance on downstream tasks. In their discussion, Liang et al. state that a larger gap "may help some fairness and zero-shot learning applications". This is not discussed in the present work.
(2) Theoretical analysis is weak:
- Theorem 4.1 only applies to $d=2$
- Proposition applies as $\tau \rightarrow 0^+$, and in this limit, the sum of CLIP loss and $m^2$-mix regularizer reduces to the negative sum of alignment and uniformity. However, the authors already claimed that vanishing small $\tau$ is a bad thing, since "We speculate this limitation is derived from (1) ... (2) vanished learnable $\tau$ (0.01) in $\mathcal{L}_{CLIP}$." Why then would you prove something about a vanishingly small $\tau$?
(3) It is unclear to me why $m^2$-mix + CLIP is not sufficient? Why do we need the additional regularization? Does $m^2$-mix by itself lead to consistent gains in performance?
(4) Concerns about experiments:
- Why do you use ViT-B/32 instead of ViT-B/16? ViT-B/16 is more common [BLIP][Maple] and many papers don't even present ViT-B/32 results. This makes it hard for readers to compare the proposed modality mix-up to current state-of-the-art. The results in Table 1, 2, and 5 are significantly worse than the results in [BLIP] and [Maple].
- Table 4: Why are there only results for Pets, SVHN, and CLEVR? It is hard to determine whether these results would generalize, perhaps you could run results on more datasets, e.g. the 10 datasets used in [Maple]?
- In Table 4, $m^2$-mix leads to about 0.7 % gain over FT, while a much larger 5.2 % gain comes from $m^3$-mix and tuning $\tau$. Table 5 shows that $m^2$-mix is worse than ZS by 3 %. This severely undermines the claims in this paper, which is that cross-modality mixup (i.e. $m^2$-mix) leads to gains in generalization performance. Where there are gains, most gains seem to come from the intra-modal mix-up regularization.
[Liang22] Liang, Victor Weixin, et al. "Mind the gap: Understanding the modality gap in multi-modal contrastive representation learning." Advances in Neural Information Processing Systems 35 (2022): 17612-17625.
[CyCLIP] Goel, Shashank, et al. "Cyclip: Cyclic contrastive language-image pretraining." Advances in Neural Information Processing Systems 35 (2022): 6704-6719.
[BLIP] Li, Junnan, et al. "Blip: Bootstrapping language-image pre-training for unified vision-language understanding and generation." International Conference on Machine Learning. PMLR, 2022.
[Maple] Khattak, Muhammad Uzair, et al. "Maple: Multi-modal prompt learning." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2023.
Minor:
In my opinion, Figure 2 is misleading. This figure displays the text and image embeddings on opposite sides of a sphere. In reality, the modality gap is not so large that text and image embeddings corresponding to the same image-caption pair fall on opposite sides of the hypersphere. So I don't think Figure 2 is a good illustration of what's really going on.
in theorem 4.1: Why is $\tilde{x} = x_1 + x_2$? shouldn't $\tilde{x}$ be the geodesic mean of the two so that $\tilde{x}$ still lies on the sphere?
Since this is a paper about mixup, the reader might expect FT+mixup to be a reasonable baseline (not feature mixup, but mixup in image input space).
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: N/A
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > **Q1)** (1) Modality gap is a well-known phenomenon [Liang22] [CyCLIP]. The authors misrepresent their contributions by implying that they discovered this. Additionally, in Tab1 of [Liang22], authors find that regularizing gap can increase or decrease the zero-shot performances and fairness. This is not discussed in the present work.
**A1)** We'd like to thank you for your valuable comments. We are aware of these papers and have cited them in our paper. As you pointed out, it has also been established by [Liang22] and others that pre-trained VLMs have separated embeddings. However, we place more emphasis on a quantitative uniformity-and-alignment perspective to analyze the VLM's embedding space, and we propose a principled solution, $m^2$-mix. In contrast, [Liang22] mainly analyzes the gap via pair-wise cosine similarity and theoretical evidence. While [Goel22] conducted experiments on uniformity and alignment, they only provided results on the zero-shot embeddings of pre-trained CLIP and lack the fine-tuned CLIP results that we observe. We will be sure to detail these differences more directly in our paper and relax our claim about the contribution.
Based on your feedback, we have designed an experiment for a fairness application, and we will include it in the final version of our work. In terms of zero-shot learning, we confirmed that reducing the modality gap with our $m^2$/$m^3$-Mix has advantages in several cases: zero-shot transfer under distribution shift in Sec 5.4, SIMAT in Sec 5.6, and retrieval in Tab. 3 of the global response. In addition, recent work [Ouali23] shows that reducing the gap helps to improve base-to-new generalization and zero-shot transfer under distribution shift. These results are somewhat at odds with the finding of [Liang22]. We believe that the relationship between the modality gap and downstream performance is a controversial topic and deserves further research. However, through our experiments, we conjecture that aligning embeddings while simultaneously increasing uniformity is beneficial for a variety of downstream tasks. Therefore, we argue that fine-tuning of VL models needs to consider both alignment and uniformity, and our method $m^{2}$-Mix achieves high alignment and uniformity along with downstream task performance improvements.
> **Q2)** 2-1) Thm 4.1 only applies to $d=2$, 2-2) Prop 4.2 the authors already claimed that vanishing small $tau$ is a bad thing, why then would you prove something about a vanishingly small $\tau$?
4.1. The convolution of two independent von Mises distributions is approximately a von Mises distribution [Marković12], [Mardia00]. However, it is known that the convolution of two independent von Mises-Fisher distributions does not have an exact closed form. Therefore, our theoretical analysis focuses on the von Mises setting. We will include empirical evidence for the von Mises-Fisher distribution as well as the von Mises distribution in our final paper.
4.2. To conduct a theoretical analysis of the $m^2$-mix-based contrastive loss, we assumed a vanishing $\tau$. This reflects the realistic situation, since a vanishing $\tau$ is actually the case during fine-tuning of CLIP. Specifically, the small $\tau$ (0.01) of the pre-trained CLIP remains small throughout the entire fine-tuning process, unless we manually increase it to a fixed constant value. Through Prop 4.2, we wanted to show that a robust representation is learned by improving alignment and uniformity with $m^2$-mix, even in this $\tau$-vanishing situation.
> **Q3)** (3) It is unclear to me why $m^2$-mix+CLIP is not sufficient? Does $m^2$-mix by itself lead to consistent gains in performance?
While $m^2$-Mix+CLIP induces performance improvements over baseline methods in few-shot adaptation (Tab. 4) and multi-modal classification (Tab. 7) in the draft, thanks to your suggestion, we felt that more extensive experiments were required to validate $m^2$-Mix. To this end, we conducted all the additional experiments with $m^2$-Mix alone, not with $m^3$-Mix. On few-shot classification (Tab. 1), image captioning (Tab. 2), and retrieval (Tab. 3) with 3 different models, our $m^2$-Mix shows consistent improvement over baselines. These results augment the empirical evidence supporting the effectiveness of $m^2$-Mix.
> **Q4)** concern about experiments:
* ViT-B/32: Because the primary goal of this study is to devise a robust fine-tuning method, we conducted the experiments in our manuscript based on ViT-B/32 for fast validation of the proposed method. However, we agree with you that validation in a standardized setting (ViT-B/16) and applicability to SOTA vision-language models are needed. To address this, we conducted additional experiments in the global response with ViT-B/16 (Tab. 1) and CoCa-ViT-L/14 [Yu22] (Tab. 2 and 3). In Tab. 1, 2, and 3, our $m^2$-mix consistently outperforms the baselines on few-shot classification, captioning, and retrieval. These results verify that our method also benefits fine-tuning of SOTA models as well as a standard model backbone.
* Datasets: Thanks for pointing out where we need to improve. In Tab. 1 of the global response, we additionally provide results on four more transfer learning benchmark datasets and demonstrate the effectiveness of $m^2$-Mix on them. We will cover all remaining datasets in the final version.
* Effect of $m^2$-Mix: The amount of improvement depends on the datasets and tasks. $m^2$-Mix achieved 1.2% and 1.6% improvements over FT on SVHN and ImageNet in Sec 5.4, and in Tab. 1 of the general response, $m^2$-Mix outperforms FT by 8.5%, 1.8%, 4.7%, and 5.2% on CLIP-Air, CLIP-Cars, CyCLIP-Air, and CyCLIP-Cars. Importantly, our $m^2$-Mix (without intra-modal mixups) shows consistent improvements across diverse setups.
*Due to the space limit, we'll address your minor remarks and attach references via official comments from tomorrow.*
---
Rebuttal Comment 1.1:
Title: response to minor concerns and reference
Comment:
> **minor 1**) In my opinion, Figure 2 is misleading. This figure displays the text and image embeddings on opposite sides of a sphere. In reality, the modality gap is not so large that text and image embeddings corresponding to the same image-caption pair fall on opposite sides of the hypersphere. So I don't think Figure 2 is a good illustration of what's really going on.
**A1)** Figure 2 is a real analysis of CLIP embeddings based on DOSNES [Lu et al. 2019]. We adopted DOSNES (a variant of t-SNE for spherical embeddings) rather than the standard dimensionality-reduction methods to naturally visualize CLIP embeddings, because CLIP operates on a hyperspherical embedding space. As you mentioned, the actual similarities between multi-modal embedding pairs are not negative, so the real embeddings on the hypersphere could show different patterns from our DOSNES visualization.
However, it is impossible to directly visualize the actual high-dimensional embedding space, and any visualization produced by dimensionality-reduction algorithms such as PCA, t-SNE, and UMAP shows a space that is significantly transformed from the actual one; the DOSNES result in Fig. 2 is no different in this respect. Nevertheless, we understand your concerns and are willing to consider re-visualizing with UMAP or t-SNE based on your feedback.
> **minor 2**) in theorem 4.1: Why is $\tilde{x}=x_{1}+x_{2}$? Shouldn't $\tilde{x}$ be the geodesic mean of the two so that $\tilde{x}$ still lies on the sphere?
**A2)** The convolution of two independent von Mises distributions is approximately a von Mises distribution. Therefore, in the two-dimensional case, the direction of $x_1 + x_2$ still approximately follows a von Mises distribution on the hypersphere, so our analysis of $x_1 + x_2$ remains valid in the von Mises setting. We will add additional theoretical or empirical results for the high-dimensional case and the geodesic-mean case in the final version.
> **minor 3**) Since this is a paper about mixup, the reader might expect FT+mixup to be a reasonable baseline (not feature mixup, but mixup in image input space).
**A3)** FT + image mixup: To reflect your constructive feedback, we evaluated two representative image-level mixup strategies (Mixup [Zhang et al. 2018] and CutMix [Yun et al. 2019]) with FT and added them as baselines in Tab. 1 of the global response. Results show that $m^2$-Mix consistently outperforms those baselines. We are planning to extend them to other experiments for our final paper.
---
## Reference
[Liang22] Liang, Victor Weixin, et al. "Mind the gap: Understanding the modality gap in multi-modal contrastive representation learning." Advances in Neural Information Processing Systems 35 (2022): 17612-17625.
[Goel22] Goel, Shashank, et al. "Cyclip: Cyclic contrastive language-image pretraining." Advances in Neural Information Processing Systems 35 (2022): 6704-6719.
[Ouali23] Ouali, Yassine, et al. "Black Box Few-Shot Adaptation for Vision-Language Models." Proceedings of the IEEE/CVF International Conference on Computer Vision. 2023.
[Marković12] Marković, Ivan, and Ivan Petrović. "Bearing-only tracking with a mixture of von Mises distributions." 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems. IEEE, 2012.
[Mardia00] Mardia, Kanti V., Peter E. Jupp, and K. V. Mardia. Directional statistics. Vol. 2. New York: Wiley, 2000.
[Yu22] Jiahui Yu, et al. "CoCa: Contrastive Captioners are Image-Text Foundation Models". Transactions on Machine Learning Research. (2022).
[Zhang et al. 2018] Zhang, Hongyi, et al. "mixup: Beyond Empirical Risk Minimization." International Conference on Learning Representations. 2018.
[Yun et al. 2019] Yun, Sangdoo, et al. "Cutmix: Regularization strategy to train strong classifiers with localizable features." Proceedings of the IEEE/CVF international conference on computer vision. 2019.
[Lu et al. 2019] Lu, Yao, Jukka Corander, and Zhirong Yang. "Doubly stochastic neighbor embedding on spheres." Pattern Recognition Letters 128 (2019): 100-106.
---
Rebuttal 2:
Title: rebuttal response
Comment: I thank the authors for the detailed rebuttal. After carefully reading through the rebuttal, I still do not think this manuscript is ready for publication at Neurips and may benefit from another round of review. To summarize, the following weaknesses remain unaddressed:
(1) *Overclaim in abstract and intro*: (I note that ECSD agrees with this). The wording in the abstract and intro attributes the discovery of the modality gap phenomenon to the authors of the current work, when this is clearly a known phenomenon. Further, the authors state as their first contribution: "(1) We found that CLIP has a bipartite embedding structure", when this structure has clearly already been explored in prior work such as [Liang22] (I am not an author of any such prior work). In my opinion, this is a severe exaggeration of the authors' contributions and warrants rejection.
I understand that relevant work is cited, and there is some difference between the current work and prior work. But that doesn't give the authors freedom to claim credit for prior discoveries.
(2) *Weak theory*: Theorem 1 applies only to two dimensions (a circle). Theorem 2 assumes vanishing $\tau$, when the authors even show in their own empirical results that increasing $\tau$ from 0.01 to 0.05 is beneficial to few-shot accuracy. Aside from the unrealistic assumptions, the theorems only show that the proposed method maximizes alignment and uniformity. The theory does not say anything about improving few-shot generalization. The authors point to [Ouali23] in their rebuttal, but this is an empirical work. To the best of my knowledge, there is no theory that relates reducing the modality gap to few-shot generalization. As a thought experiment, fine-tuning with cross-entropy will maximize alignment and uniformity and reduce the modality gap to zero, because the prototypes will become evenly distributed over the sphere and the training samples will become clustered around their respective prototypes. However, in the few-shot setting, it is often desirable to stop early (before alignment and uniformity are maximized) to prevent overfitting.
The proposed theorems do not provide insight into how the proposed method generalizes better.
(3) *Experiments*: The experiments included in the rebuttal present drastically different results from the original manuscript. Incorporating these into the manuscript would require some re-writing of the discussion. The authors are using a non-standard evaluation setting, and they re-run all baselines. It is not possible to compare the numbers reported by the authors with prior work. This may be fine, but it is impossible to judge how well the baseline is tuned (e.g., for few-shot fine-tuning with cross-entropy, it is fairly standard to tune the temperature, learning rate, batch size, training duration, and how many layers are frozen/unfrozen).
---
Rebuttal Comment 2.1:
Title: response to response of 9Koy [1/2]
Comment: Firstly, let us express our appreciation to you. Through the rebuttal and discussion, we have deliberated again on the position and implications of our work. We reply to the remaining weaknesses you raised as follows:
* Overclaim in abstract and intro
* We’d like to remark that ours and [Liang et al. 2022] are concurrent works in terms of arXiv preprint dates, which is why we expressed our findings that way in the paper. We'll revise the expression in the final version.
* But as you already understood, our finding has differences from that of [Liang et al. 2022]: 1) quantitative analysis on uniformity and alignment; 2) such analysis on fine-tuned CLIP as well as pre-trained CLIP. Therefore, there is a new and crucial message that is lacking in other works: “After standard fine-tuning on a downstream dataset, CLIP still has a separated embedding space with poor uniformity and alignment”, which motivates us to devise the multi-modal mixup for better fine-tuning. To avoid misunderstanding, we’ll revise the expressions in a way that emphasizes the differences from other studies and puts more weight on our findings.
* Weak theory
* _Theorem 4.1_) We first wanted to verify our method theoretically in a simple setup, and then empirically demonstrate it further in complex setups. While Theorem 4.1 presents the theoretical guarantee in a two-dimensional setting, thorough numerical analyses provided in the supplementary material support the validity of our method in high-dimensional settings.
* _Proposition 4.2_) As we explained in the rebuttal, we assume a vanishingly small $\tau$ that reflects the case of standard CLIP fine-tuning without manual $\tau$ engineering. While a manually increased $\tau$ helped generalization in some of our experiments, we analyzed the theoretical behavior in the default setting so that it could have broad implications for other works that do not conduct manual $\tau$ tuning. In the rebuttal experiments and Sec 5.4 of the manuscript, $m^2$-Mix without $\tau$ engineering already performs consistently better than vanilla fine-tuning on various tasks, and Proposition 4.2 could justify such empirical successes.
* _Theoretical analysis on generalization_) It is worth noting that the first work [Wang and Isola 2020] proposing the uniformity-alignment concept in uni-modal settings only provided theoretical results on the connection between contrastive loss and uniformity-alignment; its relationship to generalization performance was validated empirically in that paper. Based on that work, there have been numerous follow-up works (>1000), including theoretical [Huang et al. 2023] and empirical [Pu et al. 2022] perspectives. Through Prop 4.2 in this work, we disclose the limiting behavior of the multi-modal contrastive loss with regard to uniformity-alignment for the first time. Thus, we believe that Prop 4.2 has distinct value in itself as a first step toward a theoretical connection of uniformity-alignment optimization in multi-modal contrastive learning, and we expect that this analysis can serve as a strong bridge to further research.
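For readers unfamiliar with the uniformity-alignment measures of [Wang and Isola 2020] discussed above, the following is a rough sketch of how they can be computed on unit-normalized embeddings (synthetic data, hypothetical function names; the exact measurement protocol in the paper may differ):

```python
import numpy as np

def alignment(x, y):
    """Alignment: mean squared distance between paired unit embeddings (e.g., image/text). Lower is better."""
    return np.mean(np.sum((x - y) ** 2, axis=1))

def uniformity(x, t=2.0):
    """Uniformity: log of the mean Gaussian-kernel value over all distinct pairs. Lower (more negative) is better."""
    n = x.shape[0]
    sq = np.sum((x[:, None, :] - x[None, :, :]) ** 2, axis=-1)  # pairwise squared distances
    iu = np.triu_indices(n, k=1)                                # keep each pair once
    return np.log(np.mean(np.exp(-t * sq[iu])))

rng = np.random.default_rng(0)
z = rng.normal(size=(128, 64))
z /= np.linalg.norm(z, axis=1, keepdims=True)        # unit-normalize, as CLIP does
pair = z + 0.1 * rng.normal(size=z.shape)            # noisy stand-in for the other modality
pair /= np.linalg.norm(pair, axis=1, keepdims=True)
print(alignment(z, pair), uniformity(z))
```

Perfectly paired embeddings give zero alignment loss, while embeddings collapsed to a single point give zero (worst-case) uniformity.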
**(This is [1/2] comment. Due to space limit, the reference and responses to concern about experiments are presented in the next [2/2] comment)**
---
Reply to Comment 2.1.1:
Title: response to response of 9Koy [2/2]
Comment: (_continued response_)
* experiments
* For the few-shot classification task, it was necessary to present rebuttal experiments with a different dataset and model backbone than in the manuscript to address your and other reviewers' concerns. We will revise the content of Section 5.4 based on the results of this rebuttal and, as you say, redo the experiments that need to be redone.
* We’d like to explain more about the experimental setup of CLIP-ViT-B/16 in few-shot classification. Since one of our main focuses in this paper is to verify that robust adaptation is possible with a few training samples, we adopted a few-shot setup [Zhou et al. 2022] instead of the full-dataset setup that is standard in existing CLIP fine-tuning papers [Wortsman et al. 2022] [Kumar et al. 2022], so numerical comparison with previous studies is difficult. All methods except MaPLe were trained on 16-shot training samples for 200 (EuroSAT) and 3200 (rest) iterations per dataset, using the AdamW optimizer (default parameters) and a cosine learning-rate scheduler, following the fine-tuning settings of [Wortsman et al. 2022]; the sweep ranges of learning rate and weight decay were {1e-6, 3e-6, 5e-6, 7e-6, 1e-5} and {1e-1, 1e-2, 1e-3}, respectively. In the rebuttal experiments, we did not fix the temperature parameter. Except for MaPLe, all methods trained the entire model without frozen layers. Our $m^2$-Mix method was also run with the same number of iterations and the same parameter sweep range.
* For MaPLe and MaPLe+$m^2$-Mix, the image encoder and text encoder were frozen and only the prompt learner parameters were learned, for 1600 (EuroSAT) and 16000 (rest) iterations. Following the settings in the paper, we adopted the SGD optimizer with a cosine learning-rate scheduler and explored the learning rate among {0.0005, 0.001, 0.0035, 0.005}, including the proposed learning rate of 0.0035; the weight decay was also explored among {0.0, 0.1}, including the proposed weight decay of 0.0.
---
## Reference
[Liang et al. 2022] Liang, Victor Weixin, et al. "Mind the gap: Understanding the modality gap in multi-modal contrastive representation learning." Advances in Neural Information Processing Systems 35 (2022): 17612-17625.
[Wang and Isola 2020] Wang, Tongzhou, and Phillip Isola. "Understanding contrastive representation learning through alignment and uniformity on the hypersphere." International Conference on Machine Learning. PMLR, 2020.
[Huang et al. 2023] Huang, Weiran, et al. “Towards the Generalization of Contrastive Self-Supervised Learning.” International Conference on Learning Representations. 2023.
[Zhou et al. 2022] Zhou, Kaiyang, et al. "Learning to prompt for vision-language models." International Journal of Computer Vision 130.9 (2022): 2337-2348.
[Wortsman et al. 2022] Wortsman, Mitchell, et al. "Robust fine-tuning of zero-shot models." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2022.
[Kumar et al. 2022] Kumar, Ananya, et al. "Fine-Tuning can Distort Pretrained Features and Underperform Out-of-Distribution." International Conference on Learning Representations. 2022.
[Ouali et al. 2023] Ouali, Yassine, et al. "Black Box Few-Shot Adaptation for Vision-Language Models." Proceedings of the IEEE/CVF International Conference on Computer Vision. 2023.
[Pu et al. 2022] Pu, Shi, Kaili Zhao, and Mao Zheng. "Alignment-uniformity aware representation learning for zero-shot video classification." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2022.

---
Summary:
This paper studies a data augmentation method for effectively fine-tuning multi-modal models (e.g., CLIP). To address the poor alignment of the image and language spaces, the authors mix the embeddings of image and text while considering the geometry of the hypersphere. The authors give a theoretical analysis as well as extensive experiments on several tasks including retrieval, calibration, and few-/zero-shot classification.
Strengths: (1) As a data augmentation strategy for contrastive learning, I think that the proposed method has novelty to some extent, different from previous works such as i-Mix and Un-Mix. Specifically, the proposed data augmentation is performed in the embedding space while considering the geometry of the hypersphere, in contrast with i-Mix and Un-Mix, which mix raw images.
(2) The experiments are diverse and extensive and can validate the effectiveness of the proposed method. In particular, the performance on cross-modal retrieval performs significantly better than the competing methods.
Weaknesses: (1) The first contribution regarding poor alignment of the two modal space of CLIP seems to be overclaimed.
As far as I know, Liang et al. [24] have clearly disclosed the modality gap in multi-modal contrastive representation learning, clarifying that the two modalities in CLIP (and other models) are separately clustered (restricted to narrow cones) and have poor alignment and uniformity. What is the difference between the authors' finding and [24]? I expect an in-depth discussion in this respect, and a relaxed claim if appropriate.
(2) Mixup-like data augmentation in feature space has been proposed in previous works including [Ref01] and [Ref02]. I suggest authors explicitly discuss the connection and difference from them.
[Ref01] Terrance DeVries and Graham W Taylor. Dataset augmentation in feature space. arXiv preprint arXiv:1702.05538, 2017a.
[Ref02] Vikas Verma, Alex Lamb, Christopher Beckham, Amir Najafi, Ioannis Mitliagkas, Aaron Courville, David Lopez-Paz, and Yoshua Bengio. Manifold mixup: Better representations by interpolating hidden states. In ICML, 2019.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: Please see “Weaknesses” section.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 2 fair
Contribution: 2 fair
Limitations: The authors have adequately discussed the limitations of the proposed method in the supplementary material.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes

---
Rebuttal 1:
Rebuttal: We are grateful for your productive feedback on our paper and your remark on the strength of our methodology and experiments.
> **Q1)** The first contribution regarding poor alignment of the two modal space of CLIP seems to be overclaimed. As far as I know, Liang et al. [24] has clearly disclosed the modality gap in multi-modal contrastive representation learning, clarifying the two modalities in CLIP (and other models) are separately clustered (restricted to narrow cones) and have poor alignment and uniformity. What is difference between the authors finding and [24]? I expect an in-depth discussion in this respect and relax the claim if appropriate.
**A1)** As you pointed out, it was already clarified by [Liang et al. 2022] that pre-trained vision-language models have separated embeddings for each modality. While their finding is mainly based on pair-wise cosine similarity and theoretical analysis, we focus on quantitative measurements of uniformity and alignment (Fig. 2 of our manuscript), which are lacking in the previous work. While the authors of CyCLIP [Goel et al. 2022] analyzed the uniformity and alignment of CLIP embeddings, they only considered the zero-shot embedding of CLIP. Meanwhile, our observation covers the fine-tuned CLIP (which is lacking in the CyCLIP paper) as well as the pre-trained CLIP. We will be sure to detail these differences more directly in our paper, and soften our claim about the finding on separated embeddings in pre-trained CLIP. Thanks for the great feedback!
> **Q2)** Mixup-like data augmentation in feature space has been proposed in previous works including [Ref01] and [Ref02]. I suggest authors explicitly discuss the connection and difference from them. [Ref01] Terrance DeVries and Graham W Taylor. Dataset augmentation in feature space. arXiv preprint arXiv:1702.05538, 2017a. [Ref02] Vikas Verma, Alex Lamb, Christopher Beckham, Amir Najafi, Ioannis Mitliagkas, Aaron Courville, David Lopez-Paz, and Yoshua Bengio. Manifold mixup: Better representations by interpolating hidden states. In ICML, 2019.
**A2)** Thank you for suggesting valuable related works. Like [Ref01] and [Ref02], our method exploits data augmentation in feature space, not input space. However, our method makes two distinct contributions. First, we propose a hidden-feature mixup across heterogeneous modalities: given text and image feature spaces, our $m^{2}$-Mix explores the inter-modality relationship by mixing image and text features, while the previous methods explore the intra-modality relationship by mixing image features with each other. Second, our method interpolates between data points on the hypersphere, not in Euclidean space. As shown in Equation (4) of the main paper, we interpolate data points along the geodesic, so the augmented features lie on the hypersphere. Self-supervised methods project embeddings onto the hypersphere; therefore, it is important to guarantee that the augmented features are on the hypersphere as well. Table 6 indicates the importance of geodesic interpolation.
In other words, our method is a more general Mixup that handles multimodal and hyperspherical spaces. From Equation (4) of the main paper, our method degenerates to the traditional Mixup or Manifold Mixup if we set $a$ and $b$ to the same domain (text or image) and replace the coefficients with just $\lambda$ instead of the sinusoidal functions of $\lambda$ and $\theta$.
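To make the geodesic interpolation concrete, here is a minimal sketch (not the authors' actual implementation; `geodesic_mix` is a hypothetical name) of slerp-style mixing of an image and a text embedding with sinusoidal coefficients, which keeps the mixed feature on the unit hypersphere:

```python
import numpy as np

def geodesic_mix(a, b, lam):
    """Spherical linear interpolation (slerp) between two embeddings:
    sin((1-lam)*theta)/sin(theta) * a + sin(lam*theta)/sin(theta) * b,
    where theta is the angle between the unit-normalized vectors."""
    a = a / np.linalg.norm(a)
    b = b / np.linalg.norm(b)
    theta = np.arccos(np.clip(np.dot(a, b), -1.0, 1.0))
    if np.isclose(theta, 0.0):
        return a  # degenerate case: (nearly) identical directions
    w_a = np.sin((1.0 - lam) * theta) / np.sin(theta)
    w_b = np.sin(lam * theta) / np.sin(theta)
    return w_a * a + w_b * b  # unit-norm by construction

rng = np.random.default_rng(0)
img = rng.normal(size=512)   # stand-in for an image embedding
txt = rng.normal(size=512)   # stand-in for a text embedding
mixed = geodesic_mix(img, txt, lam=0.3)
print(np.linalg.norm(mixed))  # stays on the unit hypersphere
```

Setting both inputs to the same modality and replacing the sinusoidal weights with $\lambda$ and $1-\lambda$ recovers ordinary feature-space Mixup, as described above.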
---
[Liang et al. 2022] Liang, Victor Weixin, et al. "Mind the gap: Understanding the modality gap in multi-modal contrastive representation learning." Advances in Neural Information Processing Systems 35 (2022): 17612-17625.
[Goel et al. 2022] Goel, Shashank, et al. "Cyclip: Cyclic contrastive language-image pretraining." Advances in Neural Information Processing Systems 35 (2022): 6704-6719.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for their detailed responses. The authors alleviated but did not fully address my concern about the first contribution; I also notice that Reviewer 9Koy has the same concern. As such, I keep my original rating (5) unchanged.

---
Summary: This paper proposes a new method for robust fine-tuning called Geodesic Multi-Modal Mixup. The method improves uniformity and alignment in multi-modal learning, thereby enhancing performance on downstream tasks. Previous research has shown promising performance of large-scale pre-trained models on various downstream tasks, but the analysis of embedded representations for multi-modal tasks is still insufficient, and the transferability to cross-modal tasks needs to be improved. The motivation of this paper is to address these limitations by introducing the Geodesic Multi-Modal Mixup method, which demonstrates improved performance on various downstream tasks through experiments.
Strengths: 1. The paper provides a new perspective for understanding multi-modal embeddings in terms of uniformity and alignment.
2. The paper proposes a new end-to-end fine-tuning method for improving the robustness and transferability of multi-modal representations for downstream tasks.
3. The proposed method, Geodesic Multi-Modal Mixup, is shown to learn a more robust representation and provide improved performance on various downstream tasks such as retrieval, classification, and structure-awareness.
Weaknesses: 1. The paper highlights that the analysis of learned embeddings and transferability for cross-modal tasks, e.g., image captioning and visual question answering, has not been explored well.
2. The multiple multi-modal mixup seems too complex; would it cast a shadow on the learning and generalization ability? More multi-modal backbones should be considered, such as variants of CLIP or other vision-language models like DeCLIP, FILIP, CLOOB, and CyCLIP. Since the approach is agnostic to the architecture and models, I would encourage the authors to perform additional experiments to demonstrate the effectiveness of the proposed approach on other vision-language models.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: see weakness above
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: see weakness above
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you so much for your valuable feedback and for finding value in our work. We agree with your concerns and have conducted additional experiments based on your feedback.
---
> **Q1)** The paper highlights that the analysis of learned embeddings and transferability for cross-modal tasks has not been explored well, i.e., image captioning, visual questions answering tasks, and so on.
> **Q2)** The multiple multi-modal mixup seems too complex, would it cast shadow on the learning and generalization ability? More multi-modal backbones like the variants of CLIP or other vision-language models like DeCLIP, FILIP, CLOOB, CyCLIP. Since the approach is agnostic to the architecture and models, I would encourage the authors to perform additional experiments to demonstrate the effectiveness of the proposed approach on other vision-language models.
**A2)**
Despite its effectiveness, as you noticed, $m^3$-Mix has multiple components contributing to learning and generalization, so it is hard to isolate the core factor. To clearly determine the effect of our main contribution, 'multi-modal mixup', we perform all additional experiments in this rebuttal with $m^2$-Mix.
**A1, 2)**
In our manuscript, we performed the following tasks: (uni-modal) few-shot image classification under general and distribution-shift settings; (multi-modal) retrieval, sentiment classification, and embedding arithmetic. To further validate our method, we evaluate it on the image captioning task with the CoCa [Yu et al. 2022] model. We fine-tune CoCa on the MS COCO dataset via the CoCa objective (contrastive and captioning loss), and showcase the results of image captioning and retrieval in Tab. 2 and Tab. 3 of the global response, respectively. Our $m^2$-Mix consistently outperforms the standard fine-tuning method on almost all metrics of interest. From this, we confirm that our $m^2$-Mix benefits not only discriminative tasks but also generative tasks.
**A2)**
Besides, in Tab. 1, we evaluate our method on CyCLIP [Goel et al. 2022] for few-shot image classification. Here, $m^2$-Mix also successfully improves performance across all considered datasets. This further demonstrates that our $m^2$-Mix is a versatile fine-tuning approach that can be seamlessly integrated across VL models (CLIP, CyCLIP, CoCa, and so on).
---
[Yu et al. 2022] Jiahui Yu, et al. "CoCa: Contrastive Captioners are Image-Text Foundation Models". Transactions on Machine Learning Research. (2022).
[Goel et al. 2022] Goel, Shashank, et al. "Cyclip: Cyclic contrastive language-image pretraining." Advances in Neural Information Processing Systems 35 (2022): 6704-6719.
---
Rebuttal 2:
Title: reviewer yWq1
Comment: Dear Reviewer yWq1,
Could you please take a look at our rebuttal and let us know if the additional experiments and discussions address your concerns? We really appreciate your time and effort in reviewing our paper.
Besides, we are open to conducting more experiments and discussions if the reviewer provides any further suggestions.
Yours sincerely,
Authors of paper 3682

---
Rebuttal 1:
Rebuttal: # Global Response
We thank the reviewers for taking the time to review our paper and for your valuable feedback. We have carefully considered your comments and reflected them in our response. In this global response, 1) we first provide a brief review of our draft, then 2) provide a summary of the additional statements we have made to address your feedback and suggestions during the rebuttal period, and finally 3) elaborate on the settings of the additional experiments (**tables and figures are attached in the pdf**).
- - -
## 1. Review of research highlights
* We observed that both the pre-trained CLIP and its naively fine-tuned counterpart have separated embedding spaces for the two modalities and show quantitatively poor alignment and uniformity scores, which may limit the transferability of the learned embeddings.
* From these findings, we devised a new fundamental approach, 'geodesic multi-modal mixup ($m^2$-Mix)'-based contrastive learning, for robust fine-tuning that enhances the alignment and uniformity of embeddings.
* We provide two theoretical analyses of the proposed method to support its validity:
* hardness-guarantee of the generated sample by $m^2$-Mix, which is crucial for the success of contrastive learning.
* asymptotic behavior of standard CLIP loss with $m^2$-Mix that maximizes the alignment and uniformity with few assumptions.
* Through extensive experiments on retrieval, calibration, few-shot adaptation, zero-shot transfer under distribution shift, multi-modal classification under modality missing, and embedding arithmetic, we demonstrate that the proposed method effectively improves performance over baseline methods.
## 2. Additional remarks from the rebuttal period
During the rebuttal period, we conducted several additional experiments to address the reviewers' concerns and suggestions. Here, we provide our new findings from the results of extra experiments in summary:
* (9Koy) Without the aid of intra-modal mixup regularization such as V/L/VL-Mix, our $m^2$-Mix independently yields consistent performance gains on a variety of tasks, and in some cases the gains are significant (+5~8% Acc).
* (9Koy) We evaluate our $m^2$-Mix with ViT-B/16 (not B/32 of draft) on four additional transfer learning benchmarks, and $m^2$-Mix achieves consistent performance improvements.
* (yWq1) Beyond the basic CLIP, we demonstrated that our $m^2$-Mix is helpful to other recent vision-language models (VLM) such as CyCLIP [Goel et al. 2022] and CoCa [Yu et al. 2022].
* (yWq1) Our method gives an advantage not only for retrieval, multi-modal classification, and embedding arithmetic, but also for generative tasks such as image captioning.
* (uSAw, 9Koy) $m^2$-Mix brings benefits to large-scale state-of-the-art VLM (e.g., CoCa-Large that has 9 times more parameters compared to CLIP-Base) as well as standard CLIP.
* (iPuA) We tweaked the parameters of the Beta distribution and found that our $m^2$-Mix performs optimally with the parameter between 0.1 and 0.5, just like the standard Mixup. This suggests that, when generating a mixed sample, mixing one side more than the other, rather than half-and-half, leads to more stable learning, which contributes to better generalization performance.
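As a hedged illustration of the Beta-parameter observation above (not the authors' code; `endpoint_mass` is a hypothetical helper), sampling the mixing coefficient $\lambda \sim \mathrm{Beta}(\alpha, \alpha)$ with a small $\alpha$ concentrates mass near 0 and 1, so one side dominates each mixed sample:

```python
import numpy as np

rng = np.random.default_rng(0)

def endpoint_mass(alpha, n=100_000):
    """Fraction of Beta(alpha, alpha) samples falling near an endpoint (lam < 0.2 or lam > 0.8)."""
    lam = rng.beta(alpha, alpha, size=n)
    return np.mean((lam < 0.2) | (lam > 0.8))

# Small alpha (e.g., in the 0.1-0.5 range reported to work well) skews lambda
# toward 0 or 1, so the mixed sample stays close to one modality; alpha = 1
# makes the distribution uniform over [0, 1].
print(endpoint_mass(0.2), endpoint_mass(1.0))
```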
## 3. Experimental setup
We evaluated our method on three tasks (few-shot classification, image captioning, and retrieval), and the experimental setup for each task is as follows:
### 3.1. few-shot classification
We consider four standard benchmark datasets (EuroSAT, FGVC Aircraft, UCF101, and Stanford Cars) and two VLMs with different backbone and pre-trained weights (CLIP-ViT-B/32 and CyCLIP-ResNet50 with official pre-trained checkpoints) that are not included in our manuscript.
We adopt the same setting as Sec 5.4 of our draft to implement contrastive-learning-based fine-tuning on these image-label paired datasets, i.e., promptizing a label with a common context "a photo of {classname}" and regarding this as a caption per image. As baselines, we consider zero-shot inference, vanilla fine-tuning with traditional image-space mixups (Mixup [Zhang et al. 2018] and CutMix [Yun et al. 2019]), a robust fine-tuning method (WiSE-FT [Wortsman et al. 2022]), and a state-of-the-art (SOTA) parameter-efficient fine-tuning method (MaPLe [Khattak et al. 2023]). All experiments are done with a 16-shot training set and the full test set, and top-1 accuracy is reported.
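The promptizing step above can be sketched as follows (an illustrative snippet; the helper name and template formatting are assumptions, not the authors' code):

```python
def promptize(classnames, template="a photo of {}"):
    """Turn class labels into CLIP-style captions for contrastive fine-tuning."""
    return [template.format(c.replace("_", " ")) for c in classnames]

captions = promptize(["golden_retriever", "tabby_cat"])
print(captions)  # ['a photo of golden retriever', 'a photo of tabby cat']
```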
### 3.2. Image captioning and retrieval
We adopt CoCa-ViT-L/14 with the pre-trained checkpoint `laion2b_s13b_b90k` from the OpenCLIP library as our VLM to demonstrate the effectiveness of our method on a large-scale SOTA model. We fine-tune CoCa on the MS COCO training set with 1) only the captioning loss (the strategy that CoCa’s authors adopted), 2) contrastive loss + captioning loss, and 3) contrastive loss + $m^2$-Mix + captioning loss. After training, we evaluate the models in terms of captioning metrics such as BLEU4, CIDEr, etc. (on COCO) and retrieval recalls (on COCO and Flickr30k).
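For concreteness, the combined objective of settings (2) and (3) can be sketched as below. This is a minimal NumPy illustration assuming a symmetric InfoNCE contrastive term plus token-level cross-entropy for captioning; the temperature, shapes, and equal loss weighting are illustrative, not the values used in our runs.

```python
import numpy as np

def softmax_xent(logits, labels):
    """Mean cross-entropy over rows of `logits` against integer `labels`."""
    z = logits - logits.max(axis=1, keepdims=True)
    logp = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    return -logp[np.arange(len(labels)), labels].mean()

def finetune_loss(image_emb, text_emb, caption_logits, caption_targets,
                  temperature=0.07):
    """Contrastive + captioning objective (setting 3, minus the mixup term).

    image_emb, text_emb: (N, D) L2-normalized embeddings of matched pairs.
    caption_logits: (N, T, V) decoder outputs; caption_targets: (N, T) ids."""
    sims = image_emb @ text_emb.T / temperature          # (N, N) similarity matrix
    labels = np.arange(len(sims))                        # diagonal = positives
    contrastive = 0.5 * (softmax_xent(sims, labels) + softmax_xent(sims.T, labels))
    captioning = softmax_xent(
        caption_logits.reshape(-1, caption_logits.shape[-1]),
        caption_targets.reshape(-1))
    return contrastive + captioning
```

Setting (1) keeps only the `captioning` term; setting (3) additionally injects $m^2$-Mix hard negatives into the contrastive term.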
- - -
[Goel et al. 2022] Goel, Shashank, et al. "Cyclip: Cyclic contrastive language-image pretraining." Advances in Neural Information Processing Systems 35 (2022)
[Yu et al. 2022] Jiahui Yu, et al. "CoCa: Contrastive Captioners are Image-Text Foundation Models". Transactions on Machine Learning Research. (2022).
[Zhang et al. 2018] Zhang, Hongyi, et al. "mixup: Beyond Empirical Risk Minimization." International Conference on Learning Representations. 2018.
[Yun et al. 2019] Yun, Sangdoo, et al. "Cutmix: Regularization strategy to train strong classifiers with localizable features." Proceedings of the IEEE/CVF international conference on computer vision. 2019.
[Wortsman et al. 2022] Wortsman, Mitchell, et al. "Robust fine-tuning of zero-shot models." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2022.
[Khattak et al. 2023] Khattak, Muhammad Uzair, et al. "Maple: Multi-modal prompt learning." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2023.
Pdf: /pdf/46bbf2162457da64978b8f11663abc20187df205.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: This paper addresses the problem of improving vision-language representation learning using feature-space augmentation. The authors claim that approaches such as CLIP have led to poor alignment between text features and image features, and the space between them lacks uniformity. They propose an approach called Geodesic multimodal mix-up to mix vision and language embeddings in order to generate hard negative samples. These hard negative samples, along with the original negative and positive samples, are then used in existing contrastive algorithms. The authors provide a theoretical guarantee of the difficulty of these examples. They evaluate their approach on several common benchmarks such as MS-COCO/Flickr30K (retrieval) and Pets/SVHN/CLEVR (classification). Compared to standard mix-up techniques such as linear combination and re-normalization, their approach consistently shows improvements across different settings.
Strengths:
The paper is clearly written and easy to follow. The approach is simple but intuitively makes sense, i.e., mixing up embeddings while still residing on the normalized surface should be better than linear combination and re-normalization. The authors also provided a sound theoretical justification for their approach. The gains on the chosen benchmark over the previous approaches are decent.
Weaknesses: The improvement was shown in vanilla settings, for example, with basic CLIP and small datasets. It is not clear if such a mix-up scheme holds up in other large-scale settings where the data distribution might be different. Also, reported numbers are far from SOTA.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: N/A
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate you for your attention and valuable comment. We agree with your concerns and have conducted additional experiments to address them.
---
> **Q)** The improvement was shown in vanilla settings, for example, with basic CLIP and small datasets. It is not clear if such a mix-up scheme holds up in other large-scale settings where the data distribution might be different. Also, reported numbers are far from SOTA.
**A)**
In this work, we focus on devising a robust fine-tuning method for pre-trained VL models. Here, we believe that the method (as a ‘robust’ fine-tuning approach) should have the following desirable properties: 1. achieving stable performance in the data-scarce regime (Sec 5.4); 2. robustness under distribution shift (Sec 5.4); 3. robustness under modality missing (Sec 5.5); 4. calibration (Sec 5.2); and 5. preserving semantic embedding structures (Sec 5.6). Therefore, we devoted much of our attention to these components rather than the scale of experiments or SOTA performance, and we demonstrated that our method achieves all the above properties in the corresponding sections.
Nonetheless, as you pointed out, it is crucial to determine whether our method is effective for large-scale SOTA VL backbones beyond the basic CLIP. To this end, we evaluate our method on one of the SOTA VL models, CoCa [Yu et al. 2022], in the CoCa-Large configuration with 787M parameters, which is 9 times larger than the CLIP-ViT-Base setting in our manuscript. We fine-tuned the pre-trained CoCa model on the MS COCO dataset via CoCa’s pre-training objective with $m^2$-Mix-based contrastive loss. In Table 2 and Table 3 of the global response, our $m^2$-Mix consistently outperforms the original CoCa fine-tuning baselines with regard to captioning evaluation metrics and retrieval recalls. These results verify that our method can provide favorable advantages for large-model fine-tuning in terms of both generative and discriminative tasks.
Moreover, we compare our method with SOTA parameter-efficient fine-tuning approaches such as MaPLe [Khattak et al. 2023] both in our manuscript (Sec 5.4) and in this rebuttal (Table 1). In these experiments, our method consistently improves performance, demonstrating its effectiveness and versatility as a plug-and-play module.
---
[Yu et al. 2022] Jiahui Yu, et al. "CoCa: Contrastive Captioners are Image-Text Foundation Models". Transactions on Machine Learning Research. (2022).
[Khattak et al. 2023] Khattak, Muhammad Uzair, et al. "Maple: Multi-modal prompt learning." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2023.
---
Rebuttal 2:
Title: reviewer uSAw
Comment: Dear Reviewer uSAw,
Could you please take a look at our rebuttal and let us know if the additional experiments and discussions address your concerns? We really appreciate your time and effort in reviewing our paper.
Besides, we are open to conducting more experiments and discussions if the reviewer provides any further suggestions.
Yours sincerely,
Authors of paper 3682 | null | null | null | null | null | null |
Cinematic Mindscapes: High-quality Video Reconstruction from Brain Activity | Accept (oral) | Summary: This paper aims to reconstruct high-quality video from brain activity. A novel model called MinD-Video is proposed, which could learn spatiotemporal information from continuous fMRI data.
Strengths: 1. It is quite interesting to reconstruct videos according to human brain regions' activities, although the task is hard.
2. I believe all the newest methods are worth being tried to achieve fMRI data reconstruction, including Stable Diffusion and its video version.
3. The latent alignment and spatiotemporal attention are well-designed in this paper.
Weaknesses: 1. In Figure 4, we can see that Wen (2018) could reconstruct the shape and Kupershmit (2022) could reconstruct the texture. While videos generated by MinD-Video lack shape and texture. So I think previous works could affect more on the Video Generation part of MinD-Video.
2. The quantitative results are not sufficient. The metric values are not compared with other methods, and the gap between some experimental settings is limited.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. Please address my above concerns.
2. Do you think the Video Generation part should have controllability? I mean the current Video Generation part has no ability to reconstruct a video that is the same as source video, which would be the upper bound of the overall performance.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: None
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We would like to thank the reviewer for the recognition of our contribution and the invaluable and constructive comments. Our point-by-point response is provided as follows.
> 1. In Figure 4, we can see that Wen (2018) could reconstruct the shape and Kupershmit (2022) could reconstruct the texture. While videos generated by MinD-Video lack shape and texture. So I think previous works could affect more on the Video Generation part of MinD-Video.
**Response:** Thank you for this insightful comment. Pixel level and semantic level decodings recover visual stimuli from two different perspectives, where the trade-off between fidelity and meaningfulness needs to be considered. In this work, **we prioritize the recovery of visual semantics** in fMRI, which is crucial for understanding the complex mechanism of human perception.
**We recognize the significance of pixel and texture level information**, which contributes to generating visually closer results to the ground truth. For instance, leveraging results from previous methods could serve as a valuable guidance or initial starting point for our generative process.
Nonetheless, our aim is to establish a foundation for future research that integrates both pixel-level features and visual semantics in this particular task. By doing so, we anticipate that future studies can further refine and enhance the decoding process, leading to even more comprehensive and accurate outcomes.
> 2. The quantitative results are not sufficient. The metric values are not compared with other methods, and the gap between some experimental settings is limited.
**Response:** We thank the reviewer for the constructive suggestion. **Limited by the publicly available video samples and codes** for reproduction in the literature, we had some difficulties in comparing numerically with other methods. In the previous literature, Kupershmidt, 2022 [12] and Wen, 2018 [10] **have only released part of their reconstructed videos**. Also, because their released videos are **based on different frame rates from ours** (1Hz in [12], 0.5Hz in [10], 3Hz in ours), we did not perform the identification tests on those videos, as that might not have been a fair comparison. Instead, we relied on **a commonly available metric reported in their papers**, SSIM. This metric was calculated quantitatively **across all testing samples** according to their papers. We did the same for our approach in the original paper.
Nevertheless, as recommended, we have now decided to perform the identification test on those **incomplete samples** only (released by previous work) and compare the results across different methods (including ours). Please see the table below. Our method outperformed the two previous methods in identification test accuracy. We will include these results and **possible limitations in the revised manuscript**.
| | Image Identification Tests (50-way, Top-1) | Video Identification Tests (50-way, Top-1) |
|---|---|---|
| Ours | **0.195 ± 0.016** | **0.265 ± 0.02** |
| Kupershmidt, 2022 | 0.179 ± 0.017 | 0.238 ± 0.02 |
| Wen, 2018 | 0.07 ± 0.01 | 0.166 ± 0.016 |
In order to strengthen our experimental settings, we performed extra ablation studies on different components of our designs on top of the existing ablation experiments. The first experiment involves **excluding the MBM pre-training**, while the second experiment **removes both the MBM pre-training and the contrastive training** from the proposed method. As shown in the table below, both the MBM pre-training and the contrastive training are critical to getting the best results, and removing any of them will incur a significant drop in performance.
| | Image Identification Tests (50-way, Top-1) | Video Identification Tests (50-way, Top-1) | SSIM |
|---|---|---|---|
| Full Model | **0.172 ± 0.01** | **0.202 ± 0.02** | **0.171** |
| w/o MBM, w/ Contrastive | 0.122 ± 0.012 | 0.169 ± 0.015 | 0.143 |
| w/o MBM, w/o Contrastive | 0.076 ± 0.008 | 0.138 ± 0.013 | 0.123 |
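For reference, the N-way top-1 identification test used in these tables can be sketched as follows. This is a minimal NumPy sketch; the feature extractor producing `pred_feats` and `gt_feats` (e.g. a pretrained video classifier's embeddings) is abstracted away here.

```python
import numpy as np

def n_way_top1(pred_feats, gt_feats, n_way=50, trials=10, rng=None):
    """N-way top-1 identification accuracy.

    For each generated sample, its feature is compared against the matching
    ground-truth feature plus (n_way - 1) randomly drawn distractors; a trial
    counts as correct if the ground truth is the most similar candidate."""
    if rng is None:
        rng = np.random.default_rng()
    n = len(pred_feats)
    hits, total = 0, 0
    for _ in range(trials):
        for i in range(n):
            distractors = rng.choice(
                [j for j in range(n) if j != i], size=n_way - 1, replace=False)
            candidates = np.concatenate(([i], distractors))
            # cosine similarity if features are L2-normalized
            sims = gt_feats[candidates] @ pred_feats[i]
            hits += int(np.argmax(sims) == 0)  # index 0 is the ground truth
            total += 1
    return hits / total
```

Chance level for a 50-way test is 1/50 = 0.02, which is why accuracies around 0.2 represent a large margin over chance.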
> 3. Do you think the Video Generation part should have controllability? I mean the current Video Generation part has no ability to reconstruct a video that is the same as source video, which would be the upper bound of the overall performance.
**Response:** Thank you for your useful comment. Indeed, we agree that incorporating some form of controllability in the Video Generation part of our model could enhance the reconstruction of low-level image features like shape, texture, and location, bringing us closer to the source video. In fact, we believe that **controllability is a critical feature** for this kind of research, which is worth further research. We will include this part as future work in our revision.
---
Rebuttal Comment 1.1:
Comment: Thanks for the author's elaborate response, and all my concerns have been well addressed. So I raise my rating as "Weak Accept".
---
Reply to Comment 1.1.1:
Comment: Many thanks for your support! We truly appreciate your precious time and valuable suggestions. | Summary: This paper focuses on the task of reconstructing human vision from brain activities. The authors propose MinD-Video that learns spatiotemporal information from continuous fMRI data of the cerebral cortex progressively through masked brain modeling, multimodal contrastive learning with spatiotemporal attention, and co-training with an augmented Stable Diffusion model that incorporates network temporal inflation. And the results are evaluated with semantic and pixel metrics at video and frame levels.
Strengths: 1. The authors provide both quantitative and qualitative results, and also provide some interpretable visualization results for demonstration.
2. The proposed method seems to perform better than previous non-diffusion methods.
Weaknesses: 1. The pre-training of fMRI encoder is composed of generative and contrastive objectives, which is very similar to previous works[1] that use masked contrastive pre-training in diffusion models. And the overall framework is a combination of existing methods or tricks (masked contrastive pretraining[1]+spatio-temporal attention (ST-Attn) mechanism[2]+stable diffusion[3]). Thus the novelty of this work is limited with respect to diffusion model field.
2. The proposed method contains too many tricks and modules, with no central point with respect to its technical contributions; the overall presentation is hard to follow.
3. I have reproduced the method, the results seem not as good as the demonstration in the paper and there sometimes exists a style inconsistency between GTs and generated results. The authors seem to cherry-pick the generated videos.
[1] Yang L, Huang Z, Song Y, et al. Diffusion-based scene graph to image generation with masked contrastive pre-training[J]. arXiv preprint arXiv:2211.11138, 2022.
[2] Wu J Z, Ge Y, Wang X, et al. Tune-a-video: One-shot tuning of image diffusion models for text-to-video generation[J]. arXiv preprint arXiv:2212.11565, 2022.
[3] Rombach R, Blattmann A, Lorenz D, et al. High-resolution image synthesis with latent diffusion models[C]//Proceedings of the IEEE/CVF conference on computer vision and pattern recognition. 2022: 10684-10695.
Technical Quality: 2 fair
Clarity: 1 poor
Questions for Authors: 1. The proposed method is of limited novelty, the authors need to rethink their main contributions.
2. The writing needs to be improved, and the central point needs to be emphasized.
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 2 fair
Presentation: 1 poor
Contribution: 1 poor
Limitations: See weakness part.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for your time and effort in reviewing our work. We also appreciate your interest in our work and the useful suggestions. Our point-by-point responses are as follows.
> 1. the novelty of this work is limited with respect to diffusion model field.
> 2. The proposed method is of limited novelty.
**Response:** We would like to take this opportunity to emphasize that though our method builds upon the established techniques, the novelty of our work is not in implementing these techniques per se. Instead, it lies in applying these techniques in a new and challenging domain: learning dynamic brain activities and reconstructing visual stimuli from the brain. Our work is at the intersection of neuroscience and CV, where the focus is not solely on inventing new tricks or structures for the diffusion model. Instead, **the main aim is to tackle the unique challenges of fMRI** and make proper methodological adjustments to adapt state-of-the-art generative models to our specific task. This has been recognized by the other four reviewers.
Thanks for providing the additional reference. However, [1] provided a learning method for **scene graphs**, which is **an entirely different modality** from fMRI or any other brain recordings. Even though our method shares a similar philosophy with [1], this does not negate the novelty of our work. **The specific way of problem formulation, data modeling, and problem-solving is also an essential part of research novelty. Besides, the masking, contrastive, and generative objectives are standard techniques in representation learning.**
Furthermore, we would like to highlight a few key differences.
- We learn features of **dynamic fMRI with masking**, while [1] learns a **static scene graph without masking**.
- Our work focuses on learning the **biological features** of fMRI, whereas [1] aims to learn the **geometric information** in the scene graph.
- We must consider the **hemodynamic response**, a unique challenge when matching dynamic fMRI to videos. In contrast, the scene graph in [1] is static and matched to a static image.
- We used a **3-modality** contrast among fMRI, video, and text to create a shared multi-modality space. While [1] used a **2-modality** contrast between scene graphs and images to discriminate a binary objective.
In summary, while our method shares high-level similarities with [1] in terms of using generative and contrastive objectives, the comparison should not overlook the **different modalities and unique challenges** each work addresses. [1] deals with scene graphs, a static graphical structure, using an off-the-shelf graph encoder. Our work handles fMRI, a dynamic and biologically complex modality. Our problems involve understanding dynamic data subject to hemodynamic responses and the biological interplay of various brain regions - a quite different beast.
We hope our work can encourage more research in this field and lead to more novel development in methods and applications. We are confident that with further reflection, the value of our approach will become more apparent.
> 3. too many tricks; no central point
**Response:** We apologize for the confusion. To clarify, all our modules and tricks are **summarized and depicted graphically** in Figure 2. Critical modules and methods are **color-coded and represented by different shapes** in Figure 2. Our technical contributions are summarized on Page 2 of our paper with **5 bullet points**, each containing two short sentences. As suggested, we will revise the paper to make our contributions and method descriptions clearer to improve readability.
> 4. I have reproduced the method, the results seem not as good
**Response:** We thank the reviewer for the strong interest in our work and the effort to reproduce our findings. We are thrilled to see the level of engagement our work has elicited. To clarify, we assure you that we have presented a representative range of qualitative results of the generated videos in our paper. More importantly, we also evaluated the performance based on quantitative metrics and compared with other methods in the original paper. Given this doubt from the reviewer, we have now included the full generation samples in the "Official Comment".
It would be great to know if such quantitative evaluations have been reproduced using the same data following the same process. It is worth noting that differences in outcomes could be due to various factors, including whether we are working with full samples or subsamples, data processing approaches, parameter settings, or the specific pretrain checkpoints used.
Our codes will be made publicly available upon publication to ensure a fair reproduction for the community.
Regarding the style inconsistency, we suspect the reviewer's concern relates to the visually distinct outcomes produced for the same input across samplings. We acknowledge that this is a typical characteristic of the probabilistic nature inherent in the diffusion model, which represents one of the limitations of the diffusion-based decoding approach. However, despite these variations, **the semantic contents and evaluation metrics of the generated results remain consistent, as demonstrated in our paper**.
---
Rebuttal Comment 1.1:
Comment: Thanks for the clarification, and it has addressed all my concerns. I will raise my rating.
---
Reply to Comment 1.1.1:
Comment: Many thanks for your support! We truly appreciate your precious time and valuable suggestions. | Summary: The paper presents a pipeline to decode videos from fMRI brain activity data. The pipeline is divided into several consecutive steps. First, a Transformer-based autoencoder is trained on an unsupervised Masked Brain Modelling task to learn fMRI representations on a large corpus of fMRI data. The fMRI encoder is then augmented with a spatial and temporal attention modules to enable the processing of temporal frames. Second, the fMRI encoder is further trained on a contrastive alignment task where fMRI representations are pulled closer to CLIP-based image and text representations of the corresponding video frames. Finally, a Stable Diffusion UNet model is augmented with temporal attention to allow its conditioning on the latents of the two previous frames and is finetuned end-to-end with the fMRI encoder. Experiments on a public dataset of participants watching videos inside an fMRI scanner show better performance (SSIM) as compared to existing baselines. Ablation studies are used to evaluate the impact of windowing hyperparameters and of different design choices. Attention weights from different Transformer layers are projected on a cortical map and visualized to highlight correspondence with different brain networks.
Strengths: Originality: The combination of self-supervised learning, contrastive alignment in image/text latent space and conditional video generation for fMRI-to-video decoding is novel.
Quality: Most claims are supported (see below for possible exception to this). Qualitative and quantitative results are convincing.
Clarity: The manuscript is overall clearly written and well organized.
Significance: As one of the first fMRI-to-video approaches, this work is likely to inspire other work in the brain decoding literature. The qualitative and quantitative results suggest this is a clear improvement over existing baselines.
Weaknesses: Quality: The underlying core claim of the submission, i.e. that the dynamic information contained in brain activity data can be used to reconstruct videos, might not be fully supported in the experimental settings presented in the paper (see Question 4). Essentially, it is not clear to me that the presented results prove temporally evolving brain activity can be decoded into videos; rather, it seems to suggest existing video diffusion models can be conditioned on a temporally-fixed (i.e. coming from time t) brain-derived latent representation.
Clarity: I found the description of the inputs at the contrastive learning stage unclear (Q3).
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: 1. What is the impact of the masked modelling pretraining on downstream performance? Given the training budget required for this step of the pipeline, it might be important to know the order of performance improvement it can bring.
2. I do not fully understand the idea behind the "network inflation trick" mentioned at line 157. Am I correct in thinking that what is called a batch here is built by taking consecutive fMRI frames, rather than by randomly sampling fMRI frames across e.g. a recording?
3. At the contrastive learning stage, what are the inputs to the fMRI encoder? Is it multiple fMRI frames, in which case there is a sequence of latent predictions per video segment, or is it still single fMRI frames matched to a single image frame? (The answer to this question might influence the relevance of the next question.)
4. I understand from line 231 that the decoded videos are generated from a single fMRI frame (yielding 6 video frames at 3 fps). In this particular case, the diffusion model is used in a similar setting as an fMRI-to-image model, i.e. it is conditioned by a unique brain-derived latent collected at time t. The resulting video could therefore be seen as an "hallucination" of the diffusion model based on an initial frame, rather than a truly dynamic decoded video, i.e. where temporal changes in brain activity drive changes in video frames. I assume that this might change if longer videos were generated e.g. based on 2 consecutive TRs. Is that something you have tried and if so, do the resulting videos remain temporally consistent when the conditioning is updated to a new fMRI frame?
5. Results of Table 1 show that removing adversarial guidance has a very mild effect on performance, especially for the video-based semantic metric. However corresponding results in Figure C2 suggests removing adversarial guidance dramatically impacts decoding performance. Does this just happen to be an unrepresentative example?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 3 good
Contribution: 3 good
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We are very grateful for your appreciation of the novelty and potential impact of our work and your important suggestions. Our point-by-point responses are as follows.
> 1. What is the impact of the masked modelling pretraining?
**Response:** We thank the reviewer for raising this important point. We have now conducted additional ablation studies to thoroughly examine the impact of Masked Brain Modeling (MBM) in our approach. Specifically, we performed two experiments:
1. We conducted an experiment where we **excluded the MBM pre-training** from our proposed model.
2. We performed another experiment where **both the MBM pre-training and contrastive training were excluded** from our proposed model.
For evaluation, we employed 50-way, top-1 identification tests, as detailed in the paper, along with Structural Similarity Index (SSIM) metrics. The results are tabulated below.
| | Image Identification Tests (50-way, Top-1) | Video Identification Tests (50-way, Top-1) | SSIM |
|---|---|---|---|
| Full Model | **0.172 ± 0.01** | **0.202 ± 0.02** | **0.171** |
| w/o MBM, w/ Contrastive | 0.122 ± 0.012 | 0.169 ± 0.015 | 0.143 |
| w/o MBM, w/o Contrastive | 0.076 ± 0.008 | 0.138 ± 0.013 | 0.123 |
As the table shows, we observed a significant drop in performance when excluding MBM from our model, and this drop intensified further when both MBM and contrastive learning were excluded. Moreover, the visual quality of the generated results aligned well with these quantitative metrics. We will include both in our revised submission.
> 2. I do not fully understand the idea behind the "network inflation trick" mentioned at line 157.
**Response:** The batch here consists of **consecutive fMRI frames from a sliding time window**. The network inflation trick is a technique that enables the model to process an extra dimension by rearranging the input data shapes. For example, the transformer designed in previous work [6] for image reconstructions can only take input with 3 dimensions: n, p, and b, where n is the batch size and the last two dimensions will be used in the attention calculation in the transformer. Now we want to add an extra time dimension without changing the structure of the transformer such that the pre-trained weights provided in [6] can still be used. So to handle an input of shape (n, w, p, b), where w is the extra time dimension (sliding time window size), we can just merge two dimensions of the input into either (nw, p, b) or (np, w, b). When the dimension becomes (nw, p, b), the attention layer in the transformer is learning spatial correlations of the input, thus called spatial attention. When the dimension becomes (np, w, b), it learns the temporal correlations, thus called temporal attention. Combining spatial attention and temporal attention enables our model to learn spatial and temporal information from consecutive fMRI frames.
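The two rearrangements described above can be sketched as follows (a NumPy illustration of the dimension folding; the actual model operates on framework tensors inside the transformer):

```python
import numpy as np

def spatial_attention_view(x):
    """Fold the time window into the batch: (n, w, p, b) -> (n*w, p, b).
    Attention over the last two dims then mixes fMRI patches within a
    single frame, i.e. spatial attention."""
    n, w, p, b = x.shape
    return x.reshape(n * w, p, b)

def temporal_attention_view(x):
    """Fold the patches into the batch: (n, w, p, b) -> (n*p, w, b).
    Attention then mixes the same patch across the time window, i.e.
    temporal attention."""
    n, w, p, b = x.shape
    return x.transpose(0, 2, 1, 3).reshape(n * p, w, b)
```

Because both views still present a 3-dimensional input of the form (batch, tokens, features), the pre-trained transformer weights can be reused unchanged.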
> 3. I found the description of the inputs at the contrastive learning stage unclear (Q3).
**Response:** We apologize for the confusion. The input to the fMRI encoder at the contrastive learning stage is **a sliding time window of fMRI, i.e., multiple fMRI frames, which are matched to a video clip**. We will provide more details in our response to the next question. The output of the fMRI encoder is an embedding that is learned by considering both the spatial and the temporal features of the fMRI frames. We will revise our manuscript to make this point clearer.
> 4. Question 4
**Response:** We apologize again for the confusion. In line 231 of our paper, we meant to say that each fMRI frame corresponds to 2 seconds of video, as the TR of the fMRI scanning is 2 seconds. However, considering the nature of the hemodynamic response function of the human brain, a 2-second video may be encoded by fMRI frames at both t and t+1. Therefore, we create a sliding time window that groups multiple consecutive fMRI frames, which is used as input to the fMRI encoder. For example, consider consecutive fMRI frames and videos represented by [f1, f2, f3, f4] and [v1, v2, v3, v4]. Assuming a window size of 2, we will use [(f1, f2, v1), (f2, f3, v2), (f3, f4, v3), …] as the (fMRIs, video) pairs for training and testing. As we can see, the input to our pipeline is actually consecutive fMRI frames from a sliding time window. For consecutive fMRI-based sliding windows, the decoding results will be highly similar if they are matched to a similar ground truth. However, there could be some style inconsistency between consecutive fMRI windows due to the probabilistic nature of the diffusion model, which we believe could be improved in future research.
Another clarification is that we only reconstructed a 2-second video each time because of memory limitation of our GPU. Longer video generation requires larger GPU memory. Nevertheless, thanks to the sliding time window design, we can theoretically expand our approach and reconstruct longer videos with a larger fMRI window size with enough GPU memory.
> 5. Results of Table 1 show that removing adversarial guidance has a very mild effect.
**Response:** To clarify, without adversarial guidance, most generated videos are visually worse than our full method. In Table 1, it does drop from 0.172 to 0.117 in frame-based semantic level metrics and from 0.171 to 0.152 in the SSIM. However, even though the results are visually worse, **some low-level features are still generated**, such as animal-like moving objects, blue burry water-like objects, etc, with matching motions, which can still be classified into a correct semantic category in the metrics. Overall, adversarial guidance is still helpful.
---
Rebuttal Comment 1.1:
Comment: Thank you to the authors for the additional ablations and for the clarifications.
Thank you also for providing the generated videos. I looked through many of them and while there are clear success cases where the generation looks a lot like the stimuli, there are also many examples where the generation appears unrelated to the ground truth category. In some rarer cases the generation even looks like noise or is devoid of any clear semantics. I am concerned that the current figures in the manuscript (Fig. 1, 4 and 5) along with the description of the results and the discussion do not clearly mention or show examples of these failure cases.
As the approach is novel and the results offer a strong baseline for further work on this topic I keep my current score. However I believe the manuscript should reflect more transparently what a significant portion of the generated videos look like.
---
Reply to Comment 1.1.1:
Title: Response to the Revewer's Comment
Comment: Thanks for your reply! We agree and understand your concern. We discussed the failure cases in the supplementary material of our original submission. In the revision, we will include more failure cases and add a clear notation for the failure cases in the discussion of the results. We will also discuss the failure cases in the limitation section to give a clearer overview and explanation for the failure cases.
In short, the failure cases can be attributed into two categories.
1. Lack of pixel-level controllability. Due to the probabilistic nature of the diffusion model and the current conditioning method, the generation process lacks strong control from the fMRI latent to generate strictly matching low-level features, such as shapes, color, and geometric information. We believe this would be an important perspective for future research on this task.
2. Uncontrollable factors during the scan. Mind wandering and imagination of the subject are usually inevitable during the scan. It has been shown that imagination is involved and can be decoded to some extent from the visual cortex [4], which can lead to mismatching between the ground truth and the generation results.
We would also like to highlight that the quantitative evaluation across all samples reported in our original submission is compared with the reported results from the literature. We also compare with Kupershmidt, 2022 [12] and Wen, 2018 [10] using their released videos (partial testing set), as shown in the table below. We achieved better numeric results in these comparisons.
Thanks again for the suggestions, which enhanced the quality of our manuscript! We hope this reply has addressed your concern.
| | 50-Way, Top-1 Accuracy | 50-Way, Top-1 Accuracy |
|------------------------------------ |-------------------------------------|------------------------------------|
| | Image Identification Tests | Video Identification Tests |
| Ours | **0.195 +- 0.016** | **0.265 +- 0.02** |
| Kupershmidt, 2022 | 0.179 +- 0.017 | 0.238 +- 0.02 |
| Wen, 2018 | 0.07 +- 0.01 | 0.166 +- 0.016 |
[4] G. Shen, T. Horikawa, K. Majima, and Y. Kamitani, “Deep image reconstruction from human brain activity,” PLoS computational biology, vol. 15, no. 1, p. e1006633, 2019
[10] H. Wen, J. Shi, Y. Zhang, K.-H. Lu, J. Cao, and Z. Liu, “Neural encoding and decoding with deep learning for dynamic natural vision,” Cerebral cortex, vol. 28, no. 12, pp. 4136–4160, 2018
[12] G. Kupershmidt, R. Beliy, G. Gaziv, and M. Irani, “A penny for your (visual) thoughts: Self-supervised reconstruction of natural movies from brain activity,” arXiv preprint arXiv:2206.03544, 2022 | Summary: This research proposes a method called Mind-Video to reconstruct videos from brain activity. By utilizing continuous fMRI data and advanced techniques, Mind-Video can generate high-quality videos with arbitrary frame rates. The model outperforms previous methods in accuracy and structural similarity.
Strengths: The present work contributes with an innovative approach for reconstructing continuous visual experiences from brain activities. By utilizing masked brain modeling, multimodal contrastive learning with spatiotemporal attention, and co-training with an augmented Stable Diffusion model, their method, called Mind-Video, surpasses previous techniques in reconstructing high-quality videos.
This paper presents a meticulous study of previous work, which is important in the development of the present work. Also, the technical aspects are clearly explained and have also been evaluated using the correct metrics.
The experimental evaluation of the proposed model demonstrates its superior performance compared with other works.
The developed methodology provides interpretability, which is a very important factor in medical applications.
The interpretation of the results is good, which strengthens the results and the paper in general.
In summary, the work is an interesting application of deep learning in the medical area, and it also has a remarkable novelty.
Weaknesses: The work is very interesting, the resulting video resembles the ground true in terms of activity, but in terms of scene there is still a considerable difference.
In the "Paired fMRI-Video dataset" part, only three subjects are used, this is a limitation in terms of generalization. It is a limitation but it would be interesting to use more subjects and have more generalizable results. Nevertheless, it is ahead of several previous works.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: None
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The limitations are well explained by the authors.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank you for the strong support and the positive comments on our work. Your inspiring questions and comments are valuable for our future work. Our point-by-point responses are as follows.
> 1. The work is very interesting, the resulting video resembles the ground true in terms of activity, but in terms of scene there is still a considerable difference.
**Response:** We appreciate the useful feedback on the generated videos. We agree that while our model captures activity patterns well, scene reconstruction still presents challenges. This stems from the significantly **lower signal-to-noise ratio and increased complexities in the spatial-temporal dynamics** in the paired fMRI-video data compared to the paired fMRI-image data. Furthermore, there is higher inherent variability in **individuals' imaginations due to the dynamic nature of videos (in contrast to static images)**. In the proposed model, we focused more on the video semantic recovery than the low-level visual features. We are aligned with the reviewer that this limitation should be addressed in future research.
> 2. In the "Paired fMRI-Video dataset" part, only three subjects are used, this is a limitation in terms of generalization. It is a limitation, but it would be interesting to use more subjects and have more generalizable results. Nevertheless, it is ahead of several previous works.
**Response:** We appreciate the reviewer sharing his/her valuable perspective on this work. **We totally agree that utilizing more subjects can help evaluate and potentially enhance the generalizability of our method.** Unfortunately, this is subject to the availability of suitable fMRI-video datasets currently. This is a common problem for this type of neuroscience application. We will acknowledge cross-subject model generalization as a crucial **future work direction** in the revised manuscript. More work is needed to perform well-designed brain recording experiments and methodological development to increase the generalizability and interpretability.
---
Rebuttal Comment 1.1:
Title: Final remarks
Comment: I appreciate your prompt and comprehensive response to my review of your paper. Your detailed answers have better explained various aspects of the paper, and I find them very useful to better understand the approach and direction of the paper.
Your explanation of the challenges in scene reconstruction and activity pattern capture are insightful and aligns with my observations. I appreciate your acknowledgment of this limitation and your plans to address it in future research.
In regard to the "Paired fMRI-Video dataset" limitation, I understand the constraints imposed by the availability of suitable datasets. Your willingness to acknowledge this limitation and incorporate the importance of model generalization in future research revisions is a step in the right direction. I agree that well-designed brain recording experiments and methodological development are necessary to enhance generalization and interpretability.
Once again, I would like to express my appreciation for your responses to my comments. Your willingness to engage in discussions reflects your dedication to advancing your research and addressing the concerns of the reviewers. I am confident that your efforts will contribute positively to the field of deep learning in medical applications.
---
Reply to Comment 1.1.1:
Comment: Many thanks again for your support! We sincerely appreciate your valuable comments and your precious time and efforts in reviewing our paper! | Rebuttal 1:
Rebuttal: We are grateful to all five reviewers and AC/SACs for their valuable time, insightful comments, and useful suggestions. We will carefully revise our paper according to the comments. Our point-by-point response to the reviewers’ comments has been added to the individual chat box for each reviewer. We believe that the revised manuscript has been enhanced and the concerns have been well addressed.
Due to the character limitation, citations mentioned in the rebuttal are included below.
[1] Yang L, Huang Z, Song Y, et al. Diffusion-based scene graph to image generation with masked contrastive pre-training[J]. arXiv preprint arXiv:2211.11138, 2022.
[2] Wu J Z, Ge Y, Wang X, et al. Tune-a-video: One-shot tuning of image diffusion models for text-to-video generation[J]. arXiv preprint arXiv:2212.11565, 2022.
[3] Rombach R, Blattmann A, Lorenz D, et al. High-resolution image synthesis with latent diffusion models[C]//Proceedings of the IEEE/CVF conference on computer vision and pattern recognition. 2022: 10684-10695.
[6] Z. Chen, J. Qing, T. Xiang, W. L. Yue, and J. H. Zhou, “Seeing beyond the brain: Masked modeling conditioned diffusion model for human vision decoding,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2023. | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: The authors have developed an fMRI to video model trained using contrastive learning and stable diffusion. The generted videos are evaluated based on the semantics of their content and pixel level metrics at video and frame level, some of which utilize pretrained classifiers trained on ImageNet and VideoMAE. The work builds on top of and shares attributes with the MinD-Vis model including the MBM approach. It constitues of two modules: fMRI encoder and a video generative model, trained separately, and finetuned together.
Strengths: The paper proposes a sound architecture constitued of Spatial and Temporal attention, multimodal contrastive learning, adversarial guidance, and diffusion models. The end2end pre-processing and training process, including usage of pretrained models such as BLIP for video captioning have a few introguing novelties.
Weaknesses: Eventhough reference [6] is limited to generating images, the differentiating factors between the current work and reference [6] is better be highlighted in more details.
In the "Adversarial Guidanc for fMRI", there is a claim about using the "average all fMRI in the testing set as the negative guidance". Usage of the "testing" set in the training process, even in its aggregated (averaged) sense is not the best design.
Given the complexity of the architecture, the ablation study could have been improved to highlight the impact of more components.
Assuming the trainable modules require re-training per subject, the proposed design raises concerns around practicality of the approach.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: In the "Multimodal Contrastive Learning" subsection, the idea of {fMRI, video, caption} triplet needs further explanation. If embeddings of fMRI, video and caption encoded via three separate encoders, what does the "augmented endcoder" refer to in this context?
In the same subsection, the idea of concatenating captions with "Then" in-between is discussed. How impactful was the addition of "Then" keyword in-between the two captions? Did you run an ablation study that demonstrates positive effect?
In the "Scene-Dynamic Sparse Causal (SC) Attention" subsection, with regards to the attention maps, we wonder if the authors monitored attention-maps across transition frames; i.e., did they observe the attention to previous frames (i-2 and i-1) to disipate during transitions?
Do all trainable models (e.g. SC-MBM Encoder, etc.) require re-training for each subject?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations: Yes.
The intra-subject limitation of the method is highlighted; however, it is not clear whether the training process was repeated for each subject independently.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We truly appreciate your recognition of our contributions and novelty. Our point-by-point responses to the comments are as follows.
> 1. The differentiating factors between the current work and [6] is better be highlighted.
**Response:** We thank the reviewer for this important point. Our work extends beyond [6] by tackling the fMRI-based video reconstruction problem. Unlike image reconstruction, this adds another level of complexity. The key differences can be summarized in the following. We will revise our paper to highlight this point.
- **Problem formulation**. In [6], dynamic fMRI recordings are averaged to create a “snapshot”. While in this work, **dynamic fMRI time-series** is directly used to recover a video, which requires considering the **spatial features** and the **temporal features** of fMRI. Additionally, the **hemodynamic response** is a significant challenge in our work, making the one-to-one mapping between fMRI and video even more difficult.
- **Architecture**. To address the unique challenges, we made two key improvements. First, we enhanced the fMRI encoder to handle a sliding time window of fMRI, capturing spatial and temporal information with distinct attention heads. Second, we employed **multimodal contrastive learning** to align fMRI with the semantic space of text and images before the co-training. This contrasts [6] in which co-training was performed directly without contrastive learning.
> 2. Usage of the "testing" set in the training process is not the best design.
**Response:** To clarify, the adversarial guidance was only used in the **inference stage (testing)** to create a stronger condition and to increase the signal-to-noise ratio of the testing fMRI dataset. The averaged testing fMRI data was not involved in any part of the training process.
> 3. Given the complexity of the architecture, the ablation study could have been improved.
**Response:** As recommended, we have now conducted additional ablation studies to examine the impact of two crucial components in our approach: masked brain modeling (MBM) pre-training and contrastive training.
We performed two new experiments on top of the existing ablation experiments in the table below. The first experiment involves **excluding the MBM pre-training**, while the second experiment **removes both the MBM pre-training and the contrastive training** from the proposed method.
The results show the importance of MBM and contrastive training for all metrics. The performance drops as much as 55% without both components and 30% without only MBM. The visual quality of the generated videos also follows a similar trend, which will be detailed in our revised manuscript.
Again, we sincerely appreciate your comment. These additional experiments strengthen the empirical evidence supporting our proposed approach.
| | 50-Way, Top-1 Accuracy| 50-Way, Top-1 Accuracy| |
|-|-|-|-|
| | Image Identification Tests| Video Identification Tests | SSIM |
| Full Model| **0.172 +- 0.01**| **0.202 +- 0.02**| **0.171**|
| w/o MBM, w/ Contrastive | 0.122 +- 0.012| 0.169 +- 0.015| 0.143 |
| w/o MBM, w/o Contrastive | 0.076 +- 0.008| 0.138 +- 0.013| 0.123 |
> 4. Assuming the trainable modules require re-training per subject, the proposed design raises concerns around practicality.
**Response:** We acknowledge the practical concern regarding per-subject re-training process. But it is worth noting that almost all the existing methods for this specific neuroscience application, i.e., “brain decoding”, rely on per-subject training due to high inter-subject variability and limited datasets. Cross-subject model generalization remains an **open problem** and an exciting **future work direction** of our group and others.
> 5. What does the "augmented encoder" refer to in this context?
**Response:** The fMRI encoder in [6] can only process a single fMRI frame without considering the temporal dynamics of the fMRI recordings (i.e., multiple frames). We changed its architecture to **encode multiple fMRI frames in a sliding time window manner**, considering **spatial and temporal correlations** during feature learning. Thus, it is called an “augmented encoder”.
> 6. How impactful was the addition of "Then" keyword in-between the two captions?
**Response**: We apologize for the confusion. To clarify, "Then" concatenation was only used in the augmented stable diffusion training (not the contrastive learning). This stage does not impact the fMRI feature learning process significantly, thus having **minimal influence on the final results**. To improve clarity, we will relocate the captions of the concatenation part to the stable diffusion training subsection in our revision.
> 7. We wonder if the authors monitored attention-maps across transition frames?
**Response:** Yes, we did monitor attention maps across transition frames. However, we **did not observe** that the attention to the previous frames noticeably dissipated during the transitions, which led to frames containing parts of the content from the two scenes.
This can be attributed to the nature of our generated videos: they span only 2 seconds, mirroring the 2-second TR of the fMRI. Given this brief duration, distinguishing between transition and non-transition frames becomes inherently challenging. However, the scene-changing decoding is an exciting question and an unexplored field worth further research.
> 8. Do all trainable models require re-training for each subject?
> 9. it is not clear whether the training process was repeated for each subject independently.
**Response:** No. The most resource-consuming part is the large-scale pretraining of the MBM encoder, which **does not require re-training** for each subject. But we do **need to finetune** both the MBM Encoder and the generative model using each subject's fMRI. However, this process is **not computationally expensive**. We will make this part clearer in the revision.
---
Rebuttal Comment 1.1:
Comment: Thank you for the updates, responses and further evaluations. My current rating remains valid.
---
Reply to Comment 1.1.1:
Comment: Thanks again for your time and efforts. We really appreciate your support! | null | null | null | null | null | null |
ShiftAddViT: Mixture of Multiplication Primitives Towards Efficient Vision Transformer | Accept (poster) | Summary: This paper proposes an efficient accelerating framework for vision transformer, ShiftAddViT, which reparameterizes pre-trained ViTs with a mixture of complementary multiplication primitives and MoE designs. Specifically, All MatMuls in self-attention modules are reparameterized by additive kernels, and the remaining linear layers and MLPs are reparameterized by shift kernels. Besides, they develop a new MoE system for maintaining accuracy after reparameterization, and use a latency-aware load-balancing loss term to assign a dynamic amount of input tokens to each expert. Extensive experiments on various 2D/3D Transformer-based vision models demonstrate their superiority and efficiency.
Strengths: This paper is well-written and easy to follow. Converting multiplication operations to additive and shift is a promising technique in model compression and accleration, this paper provides a systematic design solutions for vision transformer architectures, which is interesting.
Experiments in this paper are sufficient, including 2D and 3D task., and they provides detailed latency comparison for different methods, which much helps prove their effectiveness.
Weaknesses: ShiftAddViT is a complex system design solution, where the acceleration comes from the replacing of additive and shift operations with multiplication, the maintaining accuracy relies on the MoE design. And ShiftAddViT utils TVM to implement and optimize their customized kernels for practical hardware deployment on GPUs. In this paper, the contributions of multiplation less and MoE are much independent, this paper seems not much suitable for algorithm-based conferences like NeurIPS/CVPR/ICML.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. What is the most important contributions in this paper?Multiplacation-less solution and MoE seems indepedent, It is a common sence that MoE could boost the performance on various tasks, improving accuracy with MoE seems unnecessary in this paper.
2. Re-parameterizing usually refers to one kind of mathematical equivalence transformations, such as RepVGG [1], AC-Net [2], what is the re-parameterizing in this paper?
3. ShiftAddNet [3] provides the first scheme for the replacing of shift and additive opration with multiplication, so what is the weakness for comination of ShiftAddNet and ViT architectures, could you show the comparision with ShiftAddViT.
4. When quantized to 4/8-bit, original ViT models could also achieve high performance and better acceleration, could you compare the quantized ViT with ShiftAddViT?
5. More complex task is better, such as detection and segementation.
[1] Ding X, Zhang X, Ma N, et al. Repvgg: Making vgg-style convnets great again[C]//Proceedings of the IEEE/CVF conference on computer vision and pattern recognition. 2021: 13733-13742.
[2] Ding X, Guo Y, Ding G, et al. Acnet: Strengthening the kernel skeletons for powerful cnn via asymmetric convolution blocks[C]//Proceedings of the IEEE/CVF international conference on computer vision. 2019: 1911-1920.
[3]You H, Chen X, Zhang Y, et al. Shiftaddnet: A hardware-inspired deep network[J]. Advances in Neural Information Processing Systems, 2020, 33: 2771-2783.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: This paper provides the the limitation and societal impact discussion for their proposed technique.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We greatly appreciate your careful review and constructive suggestions. Below are our detailed responses to your concerns.
**W1: The contributions of multiplication less and MoE are independent? Not very suitable for algorithm conferences like NeurIPS/CVPR/ICML?**
We thank the reviewer for acknowledging our system design and practical deployment on GPUs.
*As for the contribution*, we want to clarify that multiplication-less and MoE are linked in one organic whole. Unlike previous MoE works where all experts are of the same capacity and there is no distinction between tokens, our MoE layers combine unbalanced experts, i.e., multiplication and shift, with a hypothesis to divide important object tokens and background tokens as also visualized in Figure 6. Such a new MoE design offers a softer version of ShiftAddViTs, i.e., instead of aggressively replacing all multiplications with cheaper shifts and adds, we keep the powerful multiplication option to handle importance tokens for maintaining accuracy while leaving all remaining unimportant tokens being processed by cheaper bitwise shifts, winning a better accuracy and efficiency tradeoff.
*As for the venues*, we humbly clarify that the topic of multiplication-reduced network design is suitable and of great interest to algorithm conference audiences. For example, AdderNet [CVPR’20 Oral], DeepShift [CVPRW’21], ShiftAddNet [NeurIPS’20], ShiftAddNAS [ICML’22], and Ecoformers [NeurIPS’22] adopt shift or add layers to replace multiplication-based operations in CNNs or Transformers. TVM optimization for real hardware deployment is used by previous works like HAWQ-V3 [ICML’21] as well.
---
**Q1: What are the most important contributions? Multiplication-less solution and MoE seem independent? MoE’s uniqueness in this work?**
Our contributions are the multiplication-reduced ViTs and a new load-balanced MoE framework for a soft alternative to ShiftAddViTs. Also, we provide the first-time systematic investigation of layer sensitivity, accuracy impact, allocation strategy, and hardware implementations when considering shift&add-based ViTs as also acknowledged by Reviewer FYmk.
The multiplication-less solution and MoE are effectively combined, and the load-balanced and input-adaptive MoE design is uniquely applicable to our ShfitAddViTs, please also refer to our reply to your W1 for a detailed analysis.
---
**Q2: What is the re-parameterizing in this paper?**
Sorry for not making it clear enough. We are not making our model arithmetically identical to the original ViTs. Instead, we inherit the pre-trained weight to parameterize shift or add layers. For example, we follow the below equation to reparameterize the shift weights based on the inherited weights:
$W_{shift} = S*2^P$, where $S = sign(W); P = round(log_2(abs(W)))$
We do need finetuning to mitigate the reconstruction loss and recover the accuracy. For example, both the above S and P are trainable during finetuning.
If we directly reparameterize multiplication-based models with mathematical equivalent shifts **and** adds, from an algorithm perspective, there will be extreme non-uniform weight distributions that are hard to quantize (see our reply to Q1 of Reviewer qgW6); from a hardware perspective, currently efficient multiplication implementations already use shift and add units. There will be no energy or latency savings. Our current solution replaces multiplication with either shift **or** add layers, thus saving more hardware resources.
---
**Q3: What is the weakness of applying ShiftAddNet to ViTs? Show the comparison with it.**
Weakness of ShiftAddNet and qualitative comparisons (We have also included these points in Sec. 2 & 4):
| ShiftAddNet | ShiftAddViT |
|:---|:---|
| Adopts cascaded shift layers and add layers → a doubled number of layers, parameters, and FLOPs | Keeps the original number of layers/parameters |
| Not suitable for the MatMuls among Q/K/V in attentions as those matrices are all activations | Well supports attention in Transformers |
| Has to train the whole model from scratch | Can inherit the pre-trained weights of ViTs |
| No speedups on GPUs (much slower training & inference) | Provides GPU optimizations |
| Not compatible with MoE | Compatible with MoE for switching between mixture multiplication primitives |
**Quantitative comparisons.** We apply ShiftAddNet on top of PVTv2-B0 and show the comparison in ***Table 7 of the attached PDF in our global response***. We see that ShiftAddViT achieves 54%, >50x, and 59% parameters, GPU latency, and energy savings at much more stable training (ShiftAddNet suffers from loss NaN when being applied to ViTs).
---
**Q4: Compare the quantized ViT with ShiftAddViT?**
Sure, we compare our ShiftAddViT with the latest 4-bit Transformer quantization work [1] and show the results in ***Table 8 of the attached PDF in our global response***. *In terms of latency*, we see that our method achieves on average 4.7x and 2.4x GPU speedups than 4-bit quantized attentions and MLPs, respectively. *In terms of accuracy*, [1] reveals that existing 4-bit fully quantized training algorithms still have around a 1 ~ 2.5% accuracy drop on server tasks while our ShiftAddViT achieves comparable or even higher accuracy than original ViTs.
[1] Training Transformers with 4-bit Integers, arXiv’23
---
**Q5: The more complex task is better.**
We follow the suggestion to extend the PVTv2-B0 (ShiftAddViT) backbone to detection (follow ViTDet [ECCV'22]) and segmentation (follow PVTv2) tasks as shown in ***Table 4 of the attached PDF in our global response***.
*For the detection task*, the ShiftAdd-based backbone achieves 63.7% and 58.0% latency and energy reductions while keeping comparable mAP. *For the segmentation task*, the ShiftAdd-based backbone achieves 71.7% and 66.6% latency and energy reductions while keeping comparable mIoU.
We also extend our method to NLP tasks, please refer to our reply to Reviewer FYmk's W1.
---
Rebuttal Comment 1.1:
Title: Response to rebuttal
Comment: Thanks for your response. The rebuttal is clear and easy to understand, all my concerns have been addressed. This work is good enough to be published.
---
Reply to Comment 1.1.1:
Title: Response to Reviewer urMu
Comment: Dear Reviewer urMu,
Thank you for the prompt response and for raising the rating score! We are glad all your concerns have been thoroughly addressed and will incorporate the new discussion and experiment analysis into our final revised manuscript.
Best regards,
Paper 9955 Authors | Summary: The authors propose re-parameterizing ViTs to speed up inference without a full retraining. To do this, they introduce a new operation ShiftAddViT, which works well when applied to attention. For the MLP in a transformer, the authors use a mixture of experts. This operation reduces latency and energy usage, while maintaining accuracy.
Strengths: - The premise of leveraging power-of-two multiplications (a shift) is clever and a well-stated intuition (justified by Table 1 and also in L150-156). The additional energy analysis and chip-area statistics build ethos very well.
- The paper presents a very thorough set of experiments on ImageNet classification and novel view synthesis. The latency analysis is thorough and well-documented / reasoned.
Weaknesses: - My only qualm is that (it appears) this relies on the existence of an effective binary quantization algorithm in L176. The results convince me that this binary quantization is acceptable, but this seems aggressive (as the authors mention later, when replacing the MLP).
- From the introduction, I initially thought this re-parameterization would be arithmetically identical to the original. After reading the methods, it appears this isn't the case? In which case, wouldn't you need some fine-tuning? If so, how costly is the fine-tuning? If I'm wrong, this may be because I don't quite understand Figure 2 -- how is this related to the original matmul?
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: - I would've thought that the ShiftAddViT technique can really be applied to any operation (Clearly, that's not the case, as you added a modification for MLPs.) Why doesn't this work out of the box for any matrix multiplication?
- What is "ideal parallelism" in the footnote for Table 4? Are these latencies not actually measured end-to-end? If this is the case, do the baseline latency measurements also use the same assumption of ideal parallelism? How fast/slow is the MoE kernel when plugged into the network and the *entire* network is measured end-to-end?
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 3 good
Limitations: See above.
The evaluation is thorough, and the method is clearly presented. Despite the questions and objections I had above, I feel this is a cogently stated idea and clearly a thoroughly-investigated problem. I do still have some questions about the method and would like to know the answers, but the idea alone is definitely worth publishing.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We greatly appreciate your careful review and constructive suggestions. Below are our detailed responses to your concerns.
**W1: The only qualm is that binary quantization seems too aggressive?**
We thank the reviewer for pointing this out. We examined the sensitivity of different parts of the attention blocks and found that V and the attention scores are much more sensitive than Q or K. Quantizing or binarizing V or the attention scores yields a significant accuracy drop, e.g., a 3.6% drop when quantizing V in DeiT-T, whereas binarizing Q and K to enable add layers mostly maintains the accuracy. The rationale behind this phenomenon is that Q and K mainly serve similarity measurement, while the attention scores and V are the actual activations that need higher precision to maintain feature richness. That is also why we perform binarization on K in the KV multiplication and on Q in the Q(KV) multiplication.
Our baseline Ecoformer also binarizes Q and K but relies on a set of learned hash functions that map Q and K to identical binary codes during training. In our ShiftAddViT framework, we find that standard layer-wise quantization is also robust for quantizing Q and K, and they do not need to be identical.
---
**W2: The re-parameterization would not be arithmetically identical to the original? Need finetuning? How costly is the finetuning?**
Yes, you are right. We do not make our model arithmetically identical to the original ViTs. Instead, we inherit the pre-trained weights to parameterize the shift or add layers. For example, we reparameterize the shift weights from the inherited weights as:
$W_{shift} = S \cdot 2^P$, where $S = \mathrm{sign}(W)$ and $P = \mathrm{round}(\log_2 |W|)$
We do need finetuning to mitigate the reconstruction loss and recover the accuracy; for example, both $S$ and $P$ above are trainable during fine-tuning.
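For concreteness, an illustrative NumPy sketch of this reparameterization rule (not our actual training code; the clamp away from zero is added here only to keep the logarithm finite):

```python
import numpy as np

def reparameterize_shift(W):
    """Decompose a pretrained weight as W_shift = S * 2**P, with
    S = sign(W) and P = round(log2(|W|)); both S and P then remain
    trainable during fine-tuning."""
    S = np.sign(W)
    # Clamp |W| away from zero so log2 stays finite (our assumption).
    P = np.round(np.log2(np.maximum(np.abs(W), 1e-8)))
    return S, P
```

For example, a weight of 0.3 maps to +2^-2 = 0.25 and -1.7 maps to -2^1 = -2.0, so on integer hardware each multiplication by W_shift reduces to a sign flip plus a bitwise shift.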
If we directly reparameterized multiplication-based models with mathematically equivalent shifts **and** adds, then from an algorithm perspective there would be extremely non-uniform weight distributions (see our reply to your Q1) that are hard to quantize, and from a hardware perspective, efficient multiplication implementations already use shift and add units, so there would be no energy or latency savings. Our current solution instead approximates multiplications with either shifts **or** adds, thus saving more hardware resources.
As suggested, we collected the training/finetuning wall-clock time, which is 21% ~ 25% less than training the original Transformer model from scratch and >50x less than the previous ShiftAddNet [1] on GPUs. For example, training the PVTv1-Tiny model from scratch takes 62 hours, while our finetuning only needs 46 hours. We will add this information to the final revised manuscript to offer more insight into the training cost implications.
In addition, we would like to clarify that our main goal is efficient inference and deployment, as validated by the real-measured inference wall-clock times on an RTX 3090 GPU reported in the submitted manuscript. The training or finetuning cost savings are by-products, since we can reparameterize ShiftAddViTs from the weights of already available pre-trained checkpoints of the original ViTs.
[1] ShiftAddNet: A Hardware-Inspired Deep Network, NeurIPS 2020
---
**Q1: Why the proposed ShiftAddViT can not work out of the box for any matrix multiplication? Why add a modification to MLPs?**
Good question. As clarified in our reply to W1, our solution replaces multiplications with either shifts **or** adds rather than with mathematically equivalent shifts **and** adds.
The reason ShiftAddViT cannot work out of the box is that shift- and add-based models prefer different weight distributions from multiplication-based models. For example, while repeated additions can in principle replace any multiplicative mapping, they do so in a very parameter-inefficient way: input 8 multiplied by weight 2 yields output 16, but to reach the same output with additions alone we would need to add a weight of 8, which is much larger than the original weight. In contrast, power-of-2 quantization in shift layers is efficient but cannot span the entire continuous space of multiplicative mappings.
It is also commonly observed in the multiplication-less network community that shift- or add-based models suffer from slight accuracy drops due to reduced expressiveness [2] and demand additional techniques to fully recover the accuracy. This is why we propose a softer alternative, i.e., the mixture of experts, to mitigate the accuracy drop while preserving the efficiency benefits as much as possible.
[2] AdderNet: Do We Really Need Multiplications in Deep Learning? CVPR 2020 Oral
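To illustrate the add-based side of this trade-off, here is an AdderNet-style multiplication-free layer in NumPy (an illustrative sketch in the spirit of [2], not our exact kernel): the response is the negative L1 distance between input rows and weight columns, so the forward pass uses only additions and subtractions.

```python
import numpy as np

def adder_layer(x, W):
    """AdderNet-style layer: output[n, m] = -sum_d |x[n, d] - W[d, m]|.
    Computed with additions/subtractions only, no multiplications."""
    # x: (n, d), W: (d, m) -> output (n, m), via broadcasting
    return -np.abs(x[:, :, None] - W[None, :, :]).sum(axis=1)
```

Unlike a matmul, this similarity measure cannot reproduce arbitrary multiplicative mappings, which matches the reduced expressiveness noted above.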
---
**Q2: What is "ideal parallelism" in the footnote for Table 4? Do the baseline latency measurements also use the same assumption? How fast/slow is the MoE kernel when plugged into the network and the *entire* network is measured end-to-end?**
The "ideal parallelism" refers to a simulated scenario that mimics parallel computing: we optimize each expert separately and measure its latency; the maximum latency among all experts of an MoE layer is recorded as that layer's latency; and the total time over all layers is the final reported modularized model latency.
Since our baselines do not adopt the mixture of experts (MoE), there is no need to assume "ideal parallelism" for them. We only report the modularized latency for our models with MoE layers for readers' reference; those latencies are not used for comparison.
We want to clarify that all our comparisons are made under fair conditions, i.e., we report the end-to-end wall-clock inference time in Table 3 and use them as our final latency when compared with all baselines instead of assuming the ideal parallelism (only providing a reference number). We will make this point clear in the final revised manuscript.
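The "ideal parallelism" accounting described above can be summarized in a few lines (a sketch of the measurement protocol, not our actual profiling harness):

```python
def modularized_latency(expert_latencies_per_layer):
    """Ideal-parallelism estimate: experts within one MoE layer are
    assumed to run fully in parallel, so a layer costs its slowest
    expert; the model latency is the sum over all layers."""
    return sum(max(layer) for layer in expert_latencies_per_layer)
```

E.g., three layers with expert latencies [[1.2, 0.8], [0.5], [2.0, 1.5]] (in ms) give 1.2 + 0.5 + 2.0 = 3.7 ms.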
---
Rebuttal Comment 1.1:
Title: Thanks for the response
Comment: Thanks to the authors for their thorough response; this was very helpful, and I hadn't realized the MoE was a mixture of multiplication *or* add experts. The fine-tuning cost analysis was also welcome, although I also recognize that inference is the focus here. The above addresses my questions and concerns, so I stand by my rating!
(I think you can just submit multiple "rebuttals" per review, so the length limit per response doesn't really matter.)
---
Reply to Comment 1.1.1:
Title: Response to Reviewer qgW6
Comment: Dear Reviewer qgW6,
We thank you for the timely response and for maintaining the acceptance rating! We are glad all your questions and concerns are addressed and will incorporate the new discussion and analysis into our final revised manuscript.
We appreciate the given suggestions. As per this year's NeurIPS new policy, only one *"official rebuttal"* can be submitted in response to each reviewer. We will certainly follow up with *"official comments"* if other reviewers require further clarification or have additional questions.
Best regards,
Paper 9955 Authors | Summary: This paper proposes a new type of multiplication-reduced model ShiftAddViT, which use the additive kernels to reparameterize the batched GEMM in the attention block and uses the shift kernels to reparameterize other MLPs or linear layers. In this way, it can reduce energy-intensive multiplications. The authors utilize TVM to implement and optimize those kernels and achieve a speedup on GPUs, and also energy savings.
Strengths: - The proposed ShiftAddViT achieves significant acceleration across different models on different tasks.
- Experiments show the real speedup on GPUs to demonstrate the effectiveness of the proposed method.
Weaknesses: - Many implementation details of the proposed method are not clear enough.
- The comparison with DeiT is unfair. The proposed method first modifies the model architecture and then introduces the shift and add kernels to improve efficiency. When compared with DeiT, the architectures of the two models are not the same; the comparison should be made with the model architectures kept the same.
- Based on Table 3 and Table 4, it appears that the “Shift” was not finally used in the classification task. And in Table 4, compared with the method that only enables “Add”, the final result (highlighted with red background) does not have any advantages in both model accuracy and latency. This seems to demonstrate that the “Shift” and “MoE” methods are not effective on classification tasks.
- Table 4 and Table 5 lack the result of method “Quant.”+”Shift”+”MoE”. It would be better if these results could be provided to see the impact on model accuracy and latency when all the proposed methods are adopted.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: About shift and add:
- I think the Shift method and the power-of-two quantization should be similar. But there is no scaling factor in the Shift operation (equation 3). In this way, how to convert weights in pre-trained models to the shift layers format? Is the nearest neighbor method directly adopted? Wouldn't the reconstruction error be too large without a scaling factor?
- How many bits of the P is used in the shift layers? And during finetuning, is the STE method used to solve the gradient backpropagation for s and P?
- Both two binary methods KSH[32] and Quant[26] use the scaling factors, but those scaling factors are not shown in Figure 1. During inference, where to multiply those scaling factors? Just after the add operation?
- How many bits are used for the input activations of the ShiftAddViT attention? Since only the fixed-point activations can be shifted, the input activation of shift layers needs to be quantized to an integer first. However, it seems that the article does not mention anything related to activation quantization. And there is no step related to the activation quantization in the two-stage finetuning process. In addition, if the activations are quantized, under the shift&add paradigm, where to multiply the scaling factor from activation quantization?
- How to quantize the input of the last Shift in ShiftAddViT Attention to integer values? Those input activations should also be fixed-point numbers, not floating-point.
About the MoE framework:
- In line 223, how to get the gate value $p_i$ based on the input token $x$? $p_i(x):= e^{p_i}/\Sigma_j^ne^{p_j}$ ? Is this a typo?
About the experiments:
- In Table 4, why the latency of PVT is larger than MHA? Is it because of the different batch sizes? For a fair comparison, it is better to use the same batch size here, since the model with batch_size=1 always gets worse throughput on GPUs. And it is better to show the speedup from the Linear Attention under the fair comparison.
- What’s the difference between line 5 and line 10 in Table 4? They have the same configs, but different results.
- Based on Table 4 and Table 6, “Shift” may cause an accuracy decrease, and it is necessary to introduce the “MoE” method to compensate for the model accuracy, which leads to an increase in latency. So overall, there is no advantage to using the “shift” method in terms of the model accuracy and latency. Does the “shift” method only for energy reduction?
- Experiments use the Eyeriss accelerator to get energy consumption. What are the configs of this accelerator? What data type is used for this accelerator? When counting energy, is the energy of memory access counted?
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: The authors discussed the limitations of the implementation of their work. It is necessary to use a customized hardware accelerator to fully leverage the benefits of “Shift” and “Add”.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **W1: Implementation details are not clear enough**
Sorry for not making this clear enough. We provide more settings in Sec. 4 of the Appendix, as we focused on the motivation and high-level idea in the main paper. We will clarify further and release code & models upon acceptance.
**W2: For DeiT results, the architectures should be the same?**
Yes, for fair comparisons, we compare to both MSA (original DeiT-T) and linear attention (LA; same architecture as ours) in ***Table 5 of the attached PDF in global response***.
We see that ShiftAddViT consistently works better, reducing latency and energy by 43 ~ 65% and 16 ~ 43%, respectively, with comparable accuracy ($\pm$0.2%). Here, LA achieves better accuracy than MSA because we adopted the Norm (in Attn) and DWConv (in MLP) following TransNormer [EMNLP'22] & EfficientViT [ICCV'23]. These new operators also explain the increased latency despite the linear complexity.
The reason for building on LA is that Q/K are less sensitive than V/Attn (see our reply to Reviewer qgW6's W1).
**W3: The shift was not finally used? No advantage of final models? Shift and MoE are not effective?**
**The shift was not finally used?** We humbly clarify that the shift layer is also used in our MoE framework, i.e., Mult. expert + Shift expert, and thus is also used in the final models in Tables 3 & 4. That is, MoE can be thought of as a soft alternative to the pure shift for parameterizing linear layers or MLPs to achieve better accuracy, rather than an orthogonal technique.
**The advantage of using shift.** Replacing Mult.-based MLPs with shift layers significantly reduces latency, energy, and chip-area costs. Our shift kernel offers average speedups of 2.35x/3.07x/1.16x compared to PyTorch FakeShift, TVM FakeShift, and TVM MatMuls, respectively (Figure 3). The seemingly minor latency improvements in Tables 3 & 4 arise because the compiled model is already fully optimized as a whole (e.g., 6.34ms → 1ms for PVTv2-B0) on GPUs *with sufficient chip area*; most gains are concealed by data movements and system-level schedules.
Adopting shift layers also substantially lowers energy and chip-area usage (Table 1). ***Under the same chip area***, the latency savings are more pronounced, e.g., PVTv2-B0 w/ shift or MoE achieves 3.9 ~ 5.7x and 1.6 ~ 1.8x speedups, respectively, as summarized in ***Table 2 of the attached PDF in our global response***.
**MoE to compensate for accuracy drop.** MoE is a soft alternative that adopts both Mult. and shift experts to achieve a better accuracy-efficiency trade-off, i.e., on average a +1.36% accuracy gain with 46% ~ 53% of the shift benefits preserved.
**W4: Lack the results of “Quant. + Shift + MoE”.**
We conduct the requested experiment for PVTv2-B1:
1. Quant.: 78.70%
2. Quant. + Shift (Attn & MLPs): 77.55%
3. Quant. + Shift (Attn) + MoE (MLPs): 78.23%.
This is also consistent with our previously reported results in lines 5-7 of Table 6 when adopting KSH instead of Quant: "KSH + Shift (Attn) + MoE (MLPs)": 78.20%.
We humbly clarify that MoE is an alternative to Shift, and they cannot be applied to the same layer. Let us know if we misunderstood your question.
**Q1: Implementation details about the shift & add?**
**Scaling factor for shifts? How to convert pre-trained weights?** We reparameterize shifts following DeepShift-PS [CVPRW'21] and do not use a scaling factor.
$W_{shift} = S \cdot 2^P$, where $S = \mathrm{sign}(W)$ and $P = \mathrm{round}(\log_2 |W|)$
As both S & P are trainable during the finetuning, the reconstruction loss will be reduced.
**Bit allocation? STE used?** We adopt 4 bits for P and yes, STE is used following DeepShift.
**Where to multiply scaling factors for KSH and Quant.?** For KSH, there is no scaling factor needed as a set of hash functions is applied to convert Q/K to binary codes. For Quant., we leverage layer-wise Quant. for both Q & K, the scaling factor can be multiplied after add ops. It can be efficiently implemented following Sec. 2.2 of [1].
**Input activation Quant. for shifts? Last Shift?** The input activations of **all** shift layers are rounded to a 16-bit fixed-point format following DeepShift. We use layer-wise quantization, so the scaling factor can be multiplied after the shift ops.
[1] Quant. & Training of NNs for Integer-Arithmetic-Only Inference, CVPR'18
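To make the "scaling factor after the ops" point concrete, here is an illustrative NumPy sketch of layer-wise binarization (our simplification of the scheme in [1], with a mean-absolute-value scale as an assumption; the actual kernel fuses this into integer arithmetic):

```python
import numpy as np

def binary_linear(x, W):
    """x @ W with W binarized: W ~= alpha * B, B in {-1, +1}, one scale
    alpha per layer. The matmul against +/-1 codes needs only
    additions/subtractions; alpha multiplies once after accumulation."""
    alpha = np.abs(W).mean()           # single layer-wise scale
    B = np.where(W >= 0, 1.0, -1.0)    # binary codes
    return alpha * (x @ B)             # scale applied after the adds
```

Because the scale is per-layer rather than per-element, it never interrupts the multiplication-free accumulation.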
**Q2: How to get gate value in MoE?**
Sorry for the confusion; we merged both the gate and the softmax into one equation. The gate itself is a linear layer, and p = G(x) is the output gate value of dimension 2 (the number of experts). The softmax normalizes p, and for efficiency we use argmax to select one expert.
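An illustrative NumPy sketch of this gate (the gate weight `G` and the bias-free linear layer are our simplifying assumptions):

```python
import numpy as np

def moe_route(x, G):
    """A linear gate maps each token to one logit per expert (2 here:
    multiplication vs. shift). Softmax yields the gate values p_i(x);
    argmax selects a single expert per token at inference."""
    logits = x @ G                                  # (tokens, n_experts)
    e = np.exp(logits - logits.max(axis=-1, keepdims=True))
    p = e / e.sum(axis=-1, keepdims=True)           # normalized gate values
    return p, p.argmax(axis=-1)                     # chosen expert per token
```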
**Q3: Experiment details?**
**PVT slower than MSA? Different batch sizes (BS)?** We ensured BS=1 for all latency measurements and also observed this counterintuitive phenomenon.
The reasons are twofold: (1) linear attention introduces extra ops, e.g., normalization, DWConv, or the spatial-reduction block, which cost more time under small BS; and (2) the linear complexity is w.r.t. the number of tokens, while an input resolution of 224 is not sufficiently large → limited benefits.
To validate both points, we measure the latency of PVTv2-B0 with various BS and input resolutions as shown in ***Table 3 of the attached PDF in our global response***: Linear attention’s benefits show up under larger BS or input resolutions.
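The second point follows from associativity alone, which a few lines make explicit (an illustrative sketch with the softmax/normalization omitted):

```python
import numpy as np

def linear_attention(Q, K, V):
    """Computing K^T V first gives cost O(n * d^2) in the token count n,
    versus O(n^2 * d) for (Q K^T) V, so the benefit only appears once
    n outgrows the feature dimension d."""
    return Q @ (K.T @ V)   # K^T V is a small (d, d) matrix
```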
**Difference between lines 5 & 10 in Table 4?** For line 5, we use KSH where Q & K are identical following Ecoformer; For line 10, Q & K are independent as we directly quantize both.
**Shift advantages? Only energy savings?** No, using shifts has comprehensive benefits. Please refer to our reply to your W3 for the analysis.
**Configs of Eyeriss? What data types? Memory access?** The configs, such as bit allocations, are matched with our algorithm (e.g., INT32 for adds; INT16 for shifts). Data types and unit energy are reported in Table 1. We do count the memory access costs. More details can be found in DNN-Chip Predictor [ICASSP'20], whose contribution is the simulator for Eyeriss.
---
Rebuttal Comment 1.1:
Comment: Thanks for the feedback. My concerns are mostly well addressed, so I raise my rating. I hope the authors could add these detailed illustrations about model quantization to the revised paper, to make it clearer and stronger.
---
Reply to Comment 1.1.1:
Title: Response to Reviewer MmZd
Comment: Dear Reviewer MmZd,
Thank you for the prompt response and for raising the rating score! We are glad your questions and concerns are well addressed and will certainly include the detailed quantization illustrations in our final revised manuscript.
Best regards,
Paper 9955 Authors | Summary: This paper introduces a novel reparameterization method for efficient Vision Transformers (ViT). The method replaces the heavy multiplication operations in ViT with a combination of shift and add operations. By mapping queries and keys to binary codes in Hamming space and reparameterizing multi-layer perceptrons (MLPs) or linear layers, the multiplication operations in the model are effectively reduced. Experimental results demonstrate that ShiftAddViT achieves efficient performance on various 2D/3D Transformer visual tasks while achieving latency reduction and energy savings.
Strengths: This paper presents a clear motivation for each proposed compound. For example, the reparameterization of multiplications is inspired by hardware design practices and the concept of multiple experts (MOE) for dynamically routing tokens into multiplication or shift groups. The three questions posed by the authors clearly indicate the specific problems they are addressing. The writing and demonstration are clear and concise. In terms of results, this work delivers some impressive findings in regard to both efficiency and accuracy. Extensive ablation experiments are conducted to validate the proposed approach further.
Weaknesses: 1. A primary concern is the limited validation of the method proposed by the authors, which is solely performed on small-scale models, with the largest model encompassing 30M and 4G FLOPs. I am particularly interested in the performance of larger-scale models. Naturally, I acknowledge the constraints imposed by computational resources. However, considering the hardware specifications disclosed by the authors, it is apparent that training models of ViT-Base size or similar are feasible.
2. Secondly, my concern lies in the stability of the training process and the associated wall-clock time. Obtaining information regarding the overall training cost and wall-clock time from the authors would offer valuable insights into the algorithm's feasibility and cost implications.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: Overall, I am satisfied with the results and motivation. I am particularly concerned about whether this method can improve inference performance for larger models. The bottleneck for small models is not particularly high, to begin with, and compared to larger models, the performance of small models will undoubtedly be lower. Therefore, if the authors' method can enable larger models to achieve a significant increase in inference speed, it would greatly enhance the significance of the approach.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations: The author discussed the limitation in their paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We greatly appreciate your careful review and constructive suggestions. Below are our detailed responses to your concerns.
**W1: Most experiments are small-scale models (largest: 30M parameter and 4G FLOPs), how about the performance of larger-scale models like ViT-Base size or similar?**
Thank you for your suggestion. As advised, we further examine our proposed method on large-scale models, including PVTv2-B5 (Params: 82M; FLOPs: 11.8G) and DeiT-Base (Params: 86M; FLOPs: 17.6G) on ImageNet. The results are shown in the table below as well as in ***Table 6 of the attached PDF in our global response***. We can see that the proposed ShiftAddViT consistently performs better in terms of accuracy-efficiency trade-offs, achieving 18.4% ~ 65.7% latency and 28.9% ~ 70.3% energy reductions with comparable accuracy ($\pm$0.5%).
| Models | Methods | Accuracy (%) | Latency (ms) | Energy (mJ) |
|---|---|:---:|:---:|:---:|
| DeiT-Base | Linear Attention | 83.1 | 8.43 | 625.74 |
| DeiT-Base | **ShiftAddViT** | 82.9 | 6.88 | 185.80 |
| PVTv2-B5 | Linear SRA | 83.8 | 39.80 | 482.94 |
| PVTv2-B5 | **ShiftAddViT** | 83.3 | 13.66 | 343.37 |
As we previously mainly targeted mobile and small-to-medium model scales, this new set of large-model experiments further validates the scalability and potential of ShiftAddViT in large-model settings. We will incorporate the new results into the revised manuscript.
---
**W2: Stability of the training process and the associated wall-clock time?**
The process of training or finetuning after reparameterizing Transformer models using shifts and adds demonstrates stability. We conducted experiments involving two-step finetuning. In the initial step, we transformed Multi-Head Self Attention (MSA) into linear attention and reparameterized all Matrix Multiplications (MatMuls) with additive layers. This was followed by finetuning to restore accuracy. In the subsequent step, we reparameterized MLPs or linear layers using shift or MoE layers, again finetuning for accuracy recovery.
As suggested, we collected the training/finetuning wall-clock time, which is 21% ~ 25% less than training the original Transformer model from scratch and >50x less than the previous ShiftAddNet [1] on GPUs. For example, training the PVTv1-Tiny model from scratch takes 62 hours, while our finetuning only needs 46 hours. We will add this information to the final revised manuscript to offer more insight into the training cost implications.
In addition, we would like to clarify that our main goal is efficient inference and deployment, as validated by the real-measured inference wall-clock times on an RTX 3090 GPU reported in the submitted manuscript. The training or finetuning cost savings are by-products, since we can reparameterize ShiftAddViTs from the weights of already available pre-trained checkpoints of the original ViTs.
[1] ShiftAddNet: A Hardware-Inspired Deep Network, NeurIPS 2020
---
Rebuttal Comment 1.1:
Title: Thanks for the rebuttal
Comment: Thanks for the larger models' experiments. Although we can see the performance has a potential drop, the improvement in inference time is notable. My concerns have been mainly addressed. I have raised my score.
---
Reply to Comment 1.1.1:
Title: Response to Reviewer hFZt
Comment: Dear Reviewer hFZt,
Thank you for the prompt response and for raising the rating score! We are glad your concerns are addressed and will include the new results and corresponding analysis in our final revised manuscript.
Best regards,
Paper 9955 Authors | Rebuttal 1:
Rebuttal: **Dear ACs and Reviewers,**
First of all, we deeply appreciate the time and effort spent by you in providing the reviews, and truly value your effort, especially considering the substantial scale of a conference like NeurIPS.
We are immensely grateful for the positive feedback our paper has received. The accolades, including remarks about its excellent soundness and presentation, pioneering systematic investigation, impressive achievements in latency and energy savings, and the clarity and conciseness of both the writing and demonstrations, alongside the extensive and thorough experiments, are all deeply gratifying. It's particularly encouraging that these aspects have garnered unanimous appreciation from the reviewers.
Despite the commendations, we have also received inquiries from reviewers requesting additional experiments and further clarification. We have supplied the requested experiments and provided detailed clarifications to raised questions as summarized below.
---
**To summarize, the following experiments have been supplied:**
- *Extend to NLP tasks and more token scenarios*
- We have included the results in our rebuttal response to *Reviewer FYmk's W1*.
- Also refer to *Table 1* of the attached PDF for the organized result table.
- *Extend to larger models like ViT-Base or similar*
- We have included the results in our rebuttal response to *Reviewer hFZt’s W1*.
- Also refer to *Table 6* of the attached PDF for the organized result table.
- *DeiT comparison under the same architecture*
- We did not provide a result table in our response to *Reviewer MmZd's W2* due to length limitations.
- Please refer to *Table 5* of the attached PDF for the organized result table.
- *Ablation studies of adopting shift or not*
- We did not provide a result table in our response to *Reviewer MmZd's W3* due to length limitations.
- Please refer to *Table 2* of the attached PDF for the organized result table.
- *Ablation studies of varying batch sizes and input resolutions*
- We did not provide a result table in our response to *Reviewer MmZd's Q3* due to length limitations.
- Please refer to *Table 3* of the attached PDF for the organized result table.
- *Add baseline: compare to ShiftAddNet*
- We did not provide a result table in our response to *Reviewer urMu’s Q3* due to length limitations.
- Please refer to *Table 7* of the attached PDF for the organized result table.
- *Add baseline: compare to quantized ViTs*
- We did not provide a result table in our response to *Reviewer urMu’s Q4* due to length limitations.
- Please refer to *Table 8* of the attached PDF for the organized result table.
- *Extend to more complex tasks, such as detection and segmentation*
- We did not provide a result table in our response to *Reviewer urMu’s Q5* due to length limitations.
- Please refer to *Table 4* of the attached PDF for the organized result table.
---
**To summarize, the following questions have been clarified:**
- *MoE dispatching and the methodology for input allocation and parallel computing*
- We clarify this in our response to *Reviewer FYmk’s W2 and qgW6’s Q2*.
- *Training stability and wall-clock time*
- We clarify this in our response to *Reviewer hFZt’s W2 and qgW6’s W2*.
- *Implementation details*
- Scaling factors, bit allocation, and gate mechanism in ShiftAddViT
- We clarify this in our response to *Reviewer MmZd’s Q1 and Q2*.
- Difference between line 5 and line 10 in Table 4, i.e., KSH vs. Quant
- We clarify this in our response to *Reviewer MmZd’s Q3*.
- Configs of Eyeriss accelerator
- We clarify this in our response to *Reviewer MmZd’s Q3*.
- Lack the results of “Quant. + Shift + MoE”.
- We clarify this in our response to *Reviewer MmZd’s W4*.
- *The advantages and rationale of adopting shifts and MoE; The contributions of multiplication less and MoE are independent?*
- We clarify this in our response to *Reviewer MmZd’s W3/Q3 and urMu’s W1/Q1*.
- *Why PVT or Linear attention is slower than MSA*
- We clarify this in our response to *Reviewer MmZd’s Q3*.
- *Binary quantization seems too aggressive*
- We clarify this in our response to *Reviewer qgW6’s W1*.
- *The re-parameterization would not be arithmetically identical to the original?*
- We clarify this in our response to *Reviewer qgW6’s W2/Q1 and urMu’s Q2*.
- *Suitable for algorithm-based conferences like NeurIPS/CVPR/ICML?*
- We clarify this in our response to *Reviewer urMu’s W1*.
---
Regarding Reviewer MmZd's questions about implementation details, we have clarified them as concisely as the rebuttal's length limitations allow, and we are open to providing further details if any points remain unclear. To enhance reproducibility, we will release both the codebase and pre-trained models, enabling others to replicate our results.
We would greatly appreciate it if you could check our rebuttal response, and we hope the new experiments and clarifications resolve your concerns. We are always willing to continue the discussion with you; please let us know if our responses do not resolve your concerns so that we can clarify further. Thanks!
Best regards,
Paper 9955 Authors
Pdf: /pdf/26e6fc55572d3269277779f4734a62269656aacb.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: This work proposes ShiftAddViT, which is an efficient ViT reparameterization with a mixture of complementary multiplication primitives, such as bitwise shifts and adds.
The alternative parameterization (via quantization) is carefully examined and allocated to the different components (MHSA, MLP) of the ViT.
In addition, the authors propose a mixture of experts (MoE) framework to classify input tokens and assign different primitives to best preserve accuracy.
The MoE framework is guided by latency-aware load-balancing loss.
Multiple empirical results are demonstrated including 2D ViT in image classification task, as well as GNT for NVS task.
Strengths: 1. The paper is clearly written and easy to follow.
2. Though shift-and-add reparameterization is not completely new (binary, ternary, or power-of-2 quantization), this is a pioneering work that systematically investigates layer sensitivity and accuracy impact, allocation strategy, and hardware implementations. The hardware benchmarks with considerable latency and energy savings are impressive.
3. The experimental analysis is strong. In addition to conventional classification task, the authors also provide results on NVS task.
Weaknesses: 1. Since this work proposes a collection of analysis and optimizations on MHSA and MLP, reducing the computation complexity and energy consumption, I wonder if this work is portable to NLP tasks, especially for LLMs with more tokens?
2. I cannot fully follow the token dispatching method mentioned in Section 4.2 and Figure 6. The authors did not discuss the methodology for input allocation and parallel computing issues in detail (line#227). Intuitively, if the important and sensitive tokens (yellow) in Figure 6 are fed into powerful experts (MULT.) while the rest are fed into SHIFT experts, are they still visible to each other (i.e., do they keep a global receptive field)?
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: Please refer to concerns raised in weaknesses.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 3 good
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We greatly appreciate your positive comments and constructive suggestions. Below are our detailed responses to your concerns.
**W1: I wonder if this work is portable to NLP tasks, especially for LLMs with more tokens?**
Following your suggestion, we test our proposed optimization of MHSA and MLPs on NLP tasks. In particular, we apply our methods to Transformer models on the Long Range Arena (LRA) benchmark, which consists of sequences ranging from 1K to 16K tokens [1]. The results are shown in the table below as well as in ***Table 1 of the attached PDF in our global response***.
| Models | Listops (2K) | Retrieval (4K) | Text (4K) | Image (1K) | Average Accuracy | Latency (ms) | Energy (mJ) |
|---|---|:---:|:---:|:---:|:---:|:---:|:---:|
| Transformer | 37.10 | 79.35 | 65.02 | 38.20 | 54.92 | 84.54 | 139.83 |
| Reformer | 19.05 | 78.64 | 64.88 | 43.29 | 51.47 | 11.19 | 19.04 |
| Linformer | 37.25 | 79.37 | 55.91 | 37.48 | 52.59 | 12.13 | 19.68 |
| Performer | 18.80 | 78.62 | 63.81 | 37.07 | 49.58 | 11.93 | 18.74 |
| **ShiftAdd-Transformer** | 37.15 | 82.02 | 66.69 | 35.62 | **55.37** | **7.38** | **8.53** |
The results consistently show the superior performance of our proposed ShiftAdd-Transformer in terms of both model accuracy (+0.45% ~ +5.79%) and efficiency (1.5x ~ 11.5x latency reductions and 2.2x ~ 16.4x energy reductions on an Eyeriss-like accelerator) compared to the original Transformer and other linear-attention baselines. This indicates that our shift-and-add reparameterization and load-balanced MoE ideas are generally applicable to Transformer models and agnostic to domains and tasks. We thank the reviewer for this comment, as this set of experiments also broadens the impact of our work.
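As a quick sanity check (not part of the original rebuttal), the quoted reduction factors can be reproduced from the table above; a minimal Python sketch with the latency/energy numbers copied from the table:

```python
# Latency (ms) and energy (mJ) copied from the LRA result table above.
latency = {"Transformer": 84.54, "Reformer": 11.19, "Linformer": 12.13,
           "Performer": 11.93, "ShiftAdd": 7.38}
energy = {"Transformer": 139.83, "Reformer": 19.04, "Linformer": 19.68,
          "Performer": 18.74, "ShiftAdd": 8.53}

# Reduction factor of ShiftAdd-Transformer vs. each baseline.
lat_red = [latency[m] / latency["ShiftAdd"] for m in latency if m != "ShiftAdd"]
eng_red = [energy[m] / energy["ShiftAdd"] for m in energy if m != "ShiftAdd"]

print(round(min(lat_red), 1), round(max(lat_red), 1))  # 1.5 11.5
print(round(min(eng_red), 1), round(max(eng_red), 1))  # 2.2 16.4
```

The extremes match the quoted 1.5x ~ 11.5x latency and 2.2x ~ 16.4x energy reductions.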
[1] Long Range Arena: A Benchmark for Efficient Transformers, ICLR 2021
---
**W2: How to understand the dispatching method in Section 4.2 and Figure 6? Discuss the methodology for input allocation and parallel computing issues in detail (line#227), are input tokens of multiplication experts and shift experts still visible to each other (i.e., global receptive field)?**
Thanks for raising these questions; we clarify each point below.
***How to understand the dispatching.*** As for the dispatching method, our hypothesis is that important and sensitive tokens should be handled by powerful multiplication experts, while the rest are dispatched to cheaper shift experts, as you also pointed out. The trainable router within the MoE layers automatically learns this dispatch assignment as training converges. The learning process is guided by our proposed latency-aware load-balancing training loss (combined with the classification loss) in Section 4.2. Figure 6 visualizes the dispatching pattern actually learned by our trained model, verifying this hypothesis.
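For illustration only, a minimal sketch of the kind of hard top-1 routing described above, assuming a toy linear router with two experts ("mult" and "shift"); this is a schematic stand-in, not the actual ShiftAddViT implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def dispatch_tokens(tokens, router_weights):
    """Toy hard top-1 dispatch: each token is routed to the expert with
    the highest router logit (expert 0 = 'mult', expert 1 = 'shift')."""
    logits = tokens @ router_weights                 # (n_tokens, n_experts)
    assignment = logits.argmax(axis=1)               # hard top-1 routing
    groups = {e: tokens[assignment == e]
              for e in range(router_weights.shape[1])}
    return assignment, groups

tokens = rng.standard_normal((8, 4))                 # 8 tokens of dim 4
router = rng.standard_normal((4, 2))                 # hypothetical router, 2 experts
assignment, groups = dispatch_tokens(tokens, router)

# Every token is routed to exactly one expert.
assert sum(len(g) for g in groups.values()) == len(tokens)
```

In the actual method, the router weights are learned jointly with the model under the latency-aware load-balancing loss rather than being fixed random matrices as above.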
***Elaborate input allocation and parallel computing.*** Sorry for not fully expanding it due to the limited space. We elaborate on these two points below.
1. *For input allocation*, it is determined dynamically at runtime: the allocation is known only when the model is executed and the router outputs are received. Therefore, the shape of each expert's input and the corresponding indexes change dynamically. PyTorch handles this with dynamic graphs, whereas TVM expects static input shapes. That is why we leverage compiler support for dynamism, as proposed in Nimble [2], on top of Apache TVM to handle the dynamic input allocation.
2. *For parallel computing*, it means that different experts run in parallel. This is supported by several customized distributed training frameworks integrated with PyTorch, e.g., FasterMoE [3] and DeepSpeed [4]. In contrast, it remains nontrivial to support this in the TVM community. One option is to simulate it via modularized optimization that mimics parallel computing: we optimize each expert separately and measure its latency; the maximum latency among all experts in an MoE layer is recorded as that layer's latency, and the sum over all layers gives the final reported modularized model latency. To avoid any potential confusion between real-measured wall-clock time (no parallelism assumed) and simulated modularized latency (ideal parallelism assumed), we report both for models containing MoE layers, as shown in Tables 4 and 6, to offer more insight into the algorithm's feasibility and cost implications.
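The modularized latency simulation can be sketched as follows, assuming hypothetical per-expert latencies: under ideal parallelism, each MoE layer costs as much as its slowest expert, and the model latency is the sum over layers.

```python
def simulated_moe_latency(per_layer_expert_latencies):
    """Ideal-parallelism latency: within each MoE layer the experts run
    concurrently, so the layer costs as much as its slowest expert; the
    model latency is the sum over layers."""
    return sum(max(latencies) for latencies in per_layer_expert_latencies)

# Hypothetical per-expert latencies (ms) for three MoE layers,
# each with a MULT expert and a SHIFT expert.
layers = [[1.5, 0.5], [1.0, 0.75], [2.0, 0.25]]
print(simulated_moe_latency(layers))  # 1.5 + 1.0 + 2.0 = 4.5
```

The real-measured wall-clock time would instead sum all expert latencies sequentially, which is why the rebuttal reports both numbers.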
***Maintain global receptive field.*** Good point. The input tokens of different experts are still visible to each other because we only split tokens for the MLP layers while keeping the attention mechanism untouched, so all tokens still attend to each other in the attention layers, ensuring a global receptive field. In addition, for PVTv2 models, there are depthwise convolutions between the two MLP layers to exchange information between the two experts' outputs.
We appreciate the reviewer for raising the potential confusion readers may have and will clarify all these points in the final revised manuscript.
[2] Nimble: Efficiently Compiling Dynamic Neural Networks for Model Inference, MLSys 2021
[3] FasterMoE: Modeling and Optimizing Training of Large-Scale Dynamic Pre-Trained Models, ACM SIGPLAN Symposium on Principles and Practice of Parallel Programming 2022
[4] DeepSpeed-MoE: Advancing Mixture-of-Experts Inference and Training to Power Next-Generation AI Scale, ICML 2022
---
Rebuttal Comment 1.1:
Comment: Thanks for the rebuttal. My concerns are well addressed, thus I keep my rating.
---
Reply to Comment 1.1.1:
Title: Response to Reviewer FYmk
Comment: Dear Reviewer FYmk,
We thank you for the timely response and for keeping the acceptance rating! We are glad all your concerns are addressed and will incorporate the new experimental results and analysis into our final revised manuscript.
Best regards,
Paper 9955 Authors | null | null | null | null | null | null |
Decentralized Matrix Sensing: Statistical Guarantees and Fast Convergence | Accept (poster) | Summary: In this work, the authors proved the convergence of the distributed GD method on the over-parameterized matrix sensing problem. The convergence is based on the proposed in-network RIP condition. Numerical experiments are conducted to verify the theoretical findings.
Strengths: The paper is well-written and easy to follow. In my opinion, the results are novel and should be interesting to audiences in optimization and machine learning fields.
Weaknesses: Although I believe that the paper is theoretically sound, I feel that the proof techniques are mostly the same as [34], except the concept of in-network RIP condition. This may weaken the theoretical contribution of this work.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: (1) Equation (1): I think it should be trace(A_j * Z^*) instead of trace(A_j, Z^*) in the right term.
(2) Line 124: I think w_{ij} > 0 if and only if i and j can communicate?
(3) Theorem 1 and Remark on Line 333: it would be better if the authors could elaborate on the technical difficulty in proving the theorem, besides introducing the in-network RIP condition and modifying the proofs in [34].
(4) Corollary 1: the denominator in the left-hand side should be \|Z^*\| instead of \|Z^*\|^2?
(5) Line 355: A_i = (1/2) (S_i + S_i^T).
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations: Please see my questions in the previous section.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We acknowledge the query about the novelty of our convergence analysis relative to [34], providing an opportunity to highlight the unique features of our approach. While both methods share a two-phase structure, our proof contains technical details that substantially diverge from [34], a distinction recognized by the other Reviewers and elaborated below.
- **Different algorithms and consequences**: [34] employs the standard gradient algorithm,
$$ \bar{U}^{t+1} = \Big(I + \alpha \bar{\mathcal{A}}^*\bar{\mathcal{A}}(\bar{Z}^* - \bar{U}^t(\bar U^t)^{\top})\Big)\bar{U}^t, \qquad (1)$$
while the proposed scheme performs a *double* mixing update that intermingles iterates and local gradients:
$$U^{t+1} = \Big(\mathcal{W}^2 + \frac{\alpha}{m}\mathcal{A}^*\mathcal{A}\big(Z^{\star} - U^t(U^t)^{\top}\big)\Big)U^t. \qquad (2)$$
The difference in the updates (despite the lifting of variables), in particular the dissimilarity of the maps $\mathcal{A}$ and $\bar{\mathcal{A}}$, has the following consequences for the analysis:
-(i) *Lack of RIP*: The proofs in [34] heavily rely on the RIP of $\bar{\mathcal{A}}$. **Notably, $\mathcal{A}$ does not in general fulfill the RIP**, except under an undesirably stringent condition on the agents' local sample size, which would break the *centralized* sample complexity achievable by our scheme following our analysis. This difference renders some steps of the proofs in [34] inapplicable to our algorithm and complicates the translation of analyses between the centralized and distributed contexts.
-(ii) *Which power-like method in Phase I for (2)?* Using the RIP of $\bar{\mathcal{A}}$, Phase I in [34] demonstrates that, for small $t$, $\bar{U}^{t+1}$ in (1) remains sufficiently close to the power method iterate $\bar{\mathcal{M}}^t U^0$, (i.e., $\||\bar{U}^{t+1}-\bar{\mathcal{M}}^t U^0\||=\mathcal O(\mu^3)$), which can be proved to be 'well-aligned' to the ground truth $\bar{Z}^\star$, where $$\bar{\mathcal{M}}:={I} + \alpha \bar{\mathcal{A}}^*\bar{\mathcal{A}}(\bar{Z}^{\star}) \qquad (3)$$ and $\mu$ is the size of the initialization.
While the choice of $\bar{\mathcal{M}}$ is straightforward for the gradient algorithm (1), **the counterpart of $\bar{\mathcal{M}}$ for the iterates (2)** (denoted hereafter by $\mathcal M$) **remains unclear**. An observation under the RIP of $\bar{\mathcal{A}}^*\bar{\mathcal{A}}(\bar{Z}^{\star})$ is that $\mathcal{J}\mathcal{A}^*\mathcal{A}(Z^{\star})\mathcal{J}$ fulfills the RIP as well. Consequently, a potential choice of $\mathcal M$ in line with (3) that allows us to closely emulate the proof in [34] could be $$\mathcal M=\mathcal{J} + \frac{\alpha}{m}\mathcal{J}\mathcal{A}^*\mathcal{A}(Z^{\star})\mathcal{J} \quad \text{or}\quad \mathcal M=\mathcal{W}^2 + \frac{\alpha}{m}\mathcal{J}\mathcal{A}^*\mathcal{A}(Z^{\star})\mathcal{J}, \quad (4) $$ enabling reliance on the RIP of the mappings appearing in such $\mathcal M$'s.
Following the steps of [34] with either of these choices leads to an unsatisfactory outcome: the discrepancy of the trajectories, $\||{U}^{t+1}-{\mathcal{M}}^t U^0\||$, becomes uncontrollable by the initialization size $\mu$ alone (unlike in [34]), due to the network dependency of the mapping $\mathcal{A}$. This breaks the condition in [34] required to exit Phase I.
Therefore, if one aims to avoid imposing a local RIP (thus a local sample size) condition, the proofs of [34] are not directly applicable. To circumvent this challenge, our key innovation lies in identifying the 'right' mapping $\mathcal M$, expressed as $$\mathcal M=\mathcal{W}^2 + \frac{\alpha}{m}\mathcal{A}^*\mathcal{A}(Z^{\star}),\qquad (5)$$ which in conjunction with the *new concept* of in-network RIP, enables us to demonstrate that $\||{U}^{t+1}-{\mathcal{M}}^t U^0\||$ is controllable solely by the initialization size $\mu$ without necessitating the RIP of $\mathcal{A}^*\mathcal{A}(Z^{\star})$ in $\mathcal M$ (thereby avoiding constraints on the local sample size). In essence, the new RIP serves as the linchpin to manage the distortion that the mapping $\mathcal M$ in (5) induces on the eigenvectors and eigenvalues of $Z^\star$ via a condition on the network connectivity $\rho$ while maintaining a decoupling from the size $\mu$ of the initialization. This subtle yet powerful modification illuminates a path forward, distinct from [34], that captures the decentralized nature of the algorithm.
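As a generic illustration of the alignment mechanism behind Phase I (this toy example uses an arbitrary symmetric map, not the paper's mapping $\mathcal M$), repeatedly applying a fixed symmetric matrix to a small initialization aligns the iterate with the matrix's top eigenspace:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy symmetric map with a gapped, descending spectrum (illustrative only).
d, r = 20, 3
Q, _ = np.linalg.qr(rng.standard_normal((d, d)))      # random orthogonal basis
M = Q @ np.diag(np.linspace(2.0, 1.0, d)) @ Q.T       # eigenvalues 2.0 -> 1.0

U = 1e-3 * rng.standard_normal((d, r))                # small initialization
for _ in range(200):
    U = M @ U
    U, _ = np.linalg.qr(U)                            # re-orthonormalize for stability

top = Q[:, :r]                                        # top-r eigenvectors of M
alignment = np.linalg.norm(top.T @ U)                 # approaches sqrt(r) when aligned
```

When `U` spans the top-$r$ eigenspace, `alignment` is close to $\sqrt{r}$; the technical difficulty described above is precisely that, in the decentralized setting, the right $\mathcal M$ for which such a "well-aligned" guarantee can be controlled by $\mu$ alone is not obvious.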
-(iii) *Phase II: handling extra error terms:* The aforementioned difference of the mappings, along with the distributed nature of our algorithm, introduce additional complexity into the analysis of Phase II, due to extra error terms (such as those stemming from consensus errors) that are absent in [34]. These error components necessitate careful control and management, adding an additional layer of intricacy to our analysis
- **In-network RIP**: We feel that the new concept of in-network RIP holds more than mere technical significance; it represents an essential bridge between the centralized and distributed worlds. It provides a tool for analyzing decentralized algorithms under *centralized* sample complexity, marking a considerable departure from [34]. Far from a minor contribution, it may pave the way for broader applications and insights in distributed computation.
- **On the final stage of convergence:** Our new analysis offers convergence assurances within a specific time **window** after a defined point in time. This is an enhancement over [34], whose proofs only ensure convergence at a particular instant and not at all subsequent iterations. This expansion of the convergence window (a consequence of our proof) matters in distributed settings, where synchronizing termination at a single moment is not easily enforceable.
Please let us know if our assessment of the technical novelty is satisfactory. We are happy to provide more details.
We will also fix the typos raised in your questions. Thanks
---
Rebuttal Comment 1.1:
Comment: I would like to thank the authors for the detailed response! I am happy to increase my rating. | Summary: This paper proposes a decentralized gradient algorithm for the matrix sensing problem via Burer-Monteiro type decomposition. A new concept of RIP termed in-network RIP is introduced for the proposed algorithm, which harnesses the RIP of the measurement operator and intertwines it with the network's connectivity. The paper demonstrates the effectiveness of the proposed algorithm by providing numerical simulations.
Strengths: - The paper provides a new decentralized gradient algorithm that solves the low-rank matrix sensing problem via Burer-Monteiro type decomposition. The study covers various aspects of the algorithm, including statistical guarantees, communication complexity, and sample complexity.
- The paper introduces a new concept of in-network RIP, which harnesses the RIP of the measurement operator and intertwines it with the network's connectivity to derive favorable attributes of the new, overarching network-wide measurement operator. This concept provides a new perspective on the analysis of decentralized matrix sensing.
Weaknesses: - Lack of decentralized applications. The paper only provides some applications for centralized matrix sensing problems and presents the shortcomings of common centralized applications. It would be helpful to list some practical applications for decentralized matrix sensing problems to motivate the proposed algorithms.
- The article is innovative but not too big. The algorithm is a simple combination of [25] and [45], but it's not straightforward to extend the analysis in [34] from the centralized case to the decentralized case. In-network RIP is proposed to close the gap.
- The condition of $\rho$ in (12) seems too strict. How many communication rounds per iteration are needed to satisfy (12)? It would be helpful to present the number of communication rounds per iteration in numerical experiments.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Please explain the concept "generalization error". As far as I know, generalization error is a concept in machine learning that measures the performance of a trained model on new, unseen data. What does it mean in this paper?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the Reviewer for their insightful comments and the overall positive assessment of our work. Our response to her/his comments/questions follows, starting from those listed under the `Weaknesses'.
-$\textbf{1.}$ Solving problems in a decentralized fashion is essential when high-dimensional data is gathered in multiple locations. In these situations, gathering all the data on a single machine can be prohibitive; however, with only partial access to the data, the problem may not be solvable. A feasible approach in this circumstance is a master-client architecture (star topology with a server node connected to all the clients), but this introduces a single point of failure, and the server can easily become a communication bottleneck. Thus, when high-dimensional data is gathered in multiple/many locations, decentralized computation over mesh networks is the preferred option in terms of robustness and efficiency, and it has become a popular choice in several ML applications and computational architectures. Concrete matrix factorization/sensing applications in which high-dimensional data is gathered in multiple locations naturally include the Netflix problem [Koren, Bell, Volinsky 2019], for which schemes working on the master-client architecture have been proposed [Teflioudi, Makari, Gemulla, 2012], and seismic data interpolation [Aravkin, Kumar, Mansour, et al. 2014]. Observe that in these applications data is both high-dimensional and gathered in a decentralized fashion.
-$\textbf{2.}$ We agree with the Reviewer that the proposed scheme builds on [25]. However, it is important to remark that the algorithm in [25] was designed to deal with **convex** problems. A direct application of the same approach here would have suggested the algorithm of [25] applied to a **convex formulation** of the matrix sensing problem over the network (e.g., minimizing the nuclear norm subject to linear constraints, e.g., [3]). However, by doing so, the resulting distributed algorithm would have incurred an unaffordable communication cost of $\mathcal{O}(d^2)$ elements per iteration. This motivates the application of a double-mixing-based decentralized algorithm to the distributed **non-convex** problem in the form (2), for which there was no study in the literature. We refer the Reviewer to the reply to Reviewer XCka for details on the *technical* novelties of our convergence analysis.
Referring to [45], the authors therein established that, for unconstrained problems, the algorithm in [25] and a variant of decentralized gradient descent are equivalent. Because the matrix sensing problem is indeed unconstrained, our proposed scheme can be cast as a variant of decentralized gradient descent (up to a change of variable), which we mentioned in our manuscript. However, since the analysis in [45] does not help to establish convergence guarantees in our setting, we believe that the aforementioned change of variable is not convenient for our purposes.
More generally, we believe that for the matrix sensing problem the proposed algorithm is the most suitable among the distributed algorithms in the literature applicable to nonconvex problems. This is for two reasons: (i) contrary to the problems handled in [25] and [45], $\bar{Z}^{\star}$ is a solution to the global problem and to each individual problem, which removes the need for more involved updates, such as incorporating correction mechanisms for the local gradients (e.g., gradient tracking), even when arbitrarily-close-to-0 precision guarantees are requested; (ii) gradient tracking or other correction mechanisms would further complicate the analysis and, as argued, to no benefit.
-$\textbf{3.}$
In order to obtain the number of communication rounds required to satisfy (12) with a mixing matrix/graph of a given connectivity $\bar{\rho}$, we require $K$ communication rounds, where $K$ is such that
$$
\bar{\rho}^K \leq C \frac{\delta}{m^6 \kappa^4 r^{\star}}
$$
which is fulfilled by setting, for some universal constant $c_0 > 0$,
$$
K = \left\lceil \frac{c_0 \log\left( m \kappa r^{\star}/\delta \right)}{1-\bar{\rho}} \right\rceil.
$$
Observe that the number of communication rounds depends only logarithmically on $m$, $\kappa$, and $r^{\star}$; consequently, it scales beneficially with quantities that are moderate in size and is independent of potentially large quantities such as $d$.
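A minimal numeric sketch of this sufficiency (using the elementary bound $\log(1/\bar\rho) \geq 1-\bar\rho$, with hypothetical values for $\bar\rho$ and the right-hand-side target of (12)):

```python
import math

def rounds_needed(rho_bar, eps):
    """Number of rounds K with rho_bar**K <= eps, derived from the
    bound log(1/rho_bar) >= 1 - rho_bar (valid for 0 < rho_bar < 1)."""
    return math.ceil(math.log(1.0 / eps) / (1.0 - rho_bar))

rho_bar, eps = 0.9, 1e-6      # hypothetical connectivity and target in (12)
K = rounds_needed(rho_bar, eps)
assert rho_bar ** K <= eps    # K is sufficient since log(1/rho) >= 1 - rho
```

Doubling the target precision only adds a logarithmic number of extra rounds, consistent with the $\log(m \kappa r^{\star}/\delta)$ dependence above.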
$\textbf{Questions:}$ We agree with the reviewer that the terminology 'generalization error' is typically used in the ML community to evaluate the performance of a trained model (typically trained on the empirical risk) $U^t$ in terms of the population risk value at $U^t$. We referred to the quantity in Theorem 1 as generalization error because when particularized to the data model discussed in lines 193-197 the quantity on the RHS of (15) corresponds to the population risk value for the distributed problem.
We understand that this terminology can be misleading and we will change it in the final version of the paper.
Please let us know if further clarifications are needed.
---
Rebuttal Comment 1.1:
Title: After review
Comment: I appreciate the response, most of my concerns have been addressed. I have already scored based on this article's potential, so the score remains unchanged. | Summary: This paper studies the problem of decentralized low-rank matrix sensing. The paper presents novel theoretical results on the convergence and generalization of a standard decentralized learning algorithm. In particular, they provide convergence and generalization guarantees for decentralized gradient descent algorithm over mesh networks without any central server. To do so, the authors define a new "in-network" RIP, which serves as an equivalent quantity to the standard RIP used to establish similar results in the fully centralized setting. This quantity is shown to capture both the network connectivity and the measurement operator characteristics. Using this quantity, they establish an upper bound on the number of communication iterations and the resulting estimation error. The paper also provides simulation results that demonstrate the recoverability of the ground truth matrices using the algorithm.
Strengths: 1. The paper is very well-written and is easy to follow despite being dense.
2. The formulation of the in-network RIP is interesting and captures the complexity of the problem well.
3. The paper makes solid technical contributions to an interesting problem.
4. The interpretation of the theoretical results is much appreciated and helps the readability of the paper.
Weaknesses: 1. The figures in the experimental section could be improved in terms of presentation. Particularly, please enlarge the figure font sizes.
2. The authors could provide a refinement of equation 15 in terms of the sample complexity required to achieve epsilon relative error. The current error bound makes it hard to get a sense of how many measurements are required to get a desired error, when all other parameters are fixed.
3. In lines 95-98, the authors mention that distributed spectral methods cannot guaranty exact recovery and seem to indicate that the current algorithm can. However, it is not clear from the theory or the experiments if exact recovery is possible in the noiseless case even when the required RIP properties are obeyed. Can the authors provide further clarification on this?
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: Please see the weaknesses section.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 4 excellent
Limitations: N/A.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the Reviewer for their insightful comments and the overall positive assessment of our work. We are happy the Reviewer liked it. We answer the Reviewer's concerns in order.
-$\textbf{1.}$ We will enlarge the font of the text in the figures and rescale them for visibility. We can also change the color/grid choices to further improve figure readability.
-$\textbf{2:}$ We agree that the result can alternatively be stated in terms of the iterations needed to reach an accuracy $\varepsilon>0$. We opted for a presentation of the convergence result of the same type as in [34] to facilitate the comparison, showing that our distributed algorithm achieves centralized statistical guarantees. Addressing the Reviewer's question, an $\varepsilon$-accurate solution, $$ \frac{ \||U^{\hat{t}}(U^{\hat{t}})^\top-Z^{\star}\||_F}{\||Z^{\star}\||}\leq \varepsilon,$$ is achieved by choosing the size of the initialization $\mu$ as
$$\mu^2 \precsim \min \left\\{\frac{\varepsilon^2 r^2}{\kappa^4 d^2 (r - r^{\star})^2(r^{\star})^{4/21}} \\|\bar{X}\\|^2, \frac{\sqrt{rm}}{d \sqrt{d}\kappa^9} , \frac{\sqrt{r}}{d \sqrt{d}}\left(\kappa^2 \sqrt{\frac{d}{r}} \right)^{-96\kappa^2} \right\\} \quad \text{(A)}$$
Notice that this result holds for **any** sample size $N \geq c_1 d (r^{\star})\kappa^8$, which is **independent** of $\varepsilon$. This condition is required to guarantee the RIP condition (with $\delta \precsim \kappa^{-4}(r^{\star})^{-1/2}$).
To summarize, the condition on the sample size $N$ does not depend on the precision of the estimates the algorithm can achieve, which instead is controlled via the initialization size $\mu$ (affecting thus the convergence rate).
-$\textbf{3: }$ Contrary to our algorithm, which achieves arbitrarily small $\varepsilon>0$ precision under the $\delta$-RIP for **any** $\delta \precsim \kappa^{-4}(r^{\star})^{-1/2}$ (resulting in a **fixed** $N \geq C_1 d (r^{\star})\kappa^8$ for **any** $\varepsilon>0$), this is not the case when spectral methods are employed to solve the matrix sensing problem. Such methods provably estimate the eigenvalues and eigenvectors of the matrix $\mathcal{J}\mathcal{A}^*\mathcal{A}(Z^{\star})\mathcal{J}$ with arbitrary precision; these, however, do not match the eigenvalues and eigenvectors of $Z^{\star}$ with arbitrary precision unless $\delta$ becomes arbitrarily small (which it cannot for fixed $N$). More precisely, assuming the centralized mapping $\bar{\mathcal{A}}$ fulfills the $\delta$-RIP with fixed $\delta\neq 0$ (thus fixed, proper $N$), the estimate $Y$ produced by such procedures satisfies [Chi, Lu, Chen 2019]
$$\|Y-\bar{Z}^{\star}\| \leq \delta \sqrt{r^{\star}}\,\|\bar{Z}^{\star}\|.$$ Clearly, for a given $\delta\neq 0$, the right-hand side cannot be made arbitrarily small. As discussed in comment 2 above, this is not the case for our algorithm.
Please let us know if further clarifications are needed. We will update the paper to point out the above aspects.
---
Rebuttal Comment 1.1:
Comment: Thanks for the clarifications, I appreciate it. I do not have any further comments. | null | null | Rebuttal 1:
Rebuttal: We thank the Reviewers for their careful reading and insightful comments. We are glad that there is a consensus that the paper is well written and contains novel results that are of interest to audiences in optimization and machine learning. Our reply to their specific questions and key comments follow. In our replies we use the same notation as in the paper.
Pdf: | NeurIPS_2023_submissions_huggingface | 2023 |
Segment Any Point Cloud Sequences by Distilling Vision Foundation Models | Accept (spotlight) | Summary: This study represents the pioneering effort to utilize 2D vision foundation models for self-supervised representation learning on large-scale 3D point clouds. They introduce Seal, an innovative framework specifically designed to extract informative features from sequences of automotive point clouds. With its emphasis on scalability, consistency, and generalizability, Seal effectively captures semantic-aware spatial and temporal consistency, enabling the generation of highly valuable insights. Finally, they clearly demonstrate their superiority over previous state-of-the-art (SoTA) methods in both linear probing and fine-tuning for various downstream tasks.
Strengths: 1. The paper is well-written and easily comprehensible.
2. To my knowledge, this is the first attempt to utilize large-scale vision models for aiding 3D point cloud segmentation. The authors have also put in tremendous effort, completing this work within just a short span of one to two months.
3. Conducted extensive experiments and achieved satisfactory results on various segmentation datasets.
Weaknesses: 1. Strictly speaking, the approach in this paper is not considered unsupervised pretraining as it utilizes large models that rely on additional data. Most methods in Table 1, however, are unsupervised pretraining. Nevertheless, this distinction is not crucial, as the primary concern is to improve model performance, which is achieved through effective methods.
2. I believe that the approach presented in the paper has not fully distilled the knowledge of large 2D models. When using the complete dataset for both pretraining and fine-tuning, the information gain from distilling large models is not significant. Why do I say this? In the method employed in this paper, the large models primarily provide semantic supervision signals. If full semantic labels are given during fine-tuning, the information gain is limited. Although Table 2 shows significant improvements for some full dataset cases, this is mainly due to the use of semantic features in distillation rather than simple semantic pseudo-labels. The advantage of distilling large 2D models is that I can use more image-lidar pairs during pretraining. I believe that if additional image-lidar pairs are used, such as on the nuscenes dataset, there will be further improvements. I also hope to see similar experiments, such as pretraining with ten times the number of image-lidar pairs compared to nuscenes.
3. The method proposed in this paper is expected to be very general, so it should yield better results for other 3D perception tasks (e.g., 3D object detection).
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: The questions are all in the weaknesses section.
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: Limitations are all in the weaknesses section.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank Reviewer 7qT5 for devoting time to this review and providing valuable comments.
---
> ***Q1:** "The approach in this paper is not considered unsupervised pretraining as it utilizes large models that rely on additional data."*
**A:** Thanks for the comment. We agree with the reviewer that this approach is not strictly unsupervised, since the vision foundation models (VFMs) used are trained with image annotations. We also believe that this distinction is not crucial as the use of VFMs is becoming prevailing and can be considered as, to a certain extent, *"off-the-shelf"*. To avoid any potential misunderstanding, we will revise the tone of the elaboration and try to avoid using *"unsupervised"* in the revised manuscript.
---
> ***Q2:** "I believe that if additional image-lidar pairs are used, there will be further improvements. I hope to see similar experiments, such as pretraining with ten times the number of image-lidar pairs compared to nuScenes."*
**A:** Thanks for asking this insightful question. Our response to this question is as follows:
- We agree with the reviewer that if sufficient semantic labels are given during the fine-tuning, the gain from model pretraining would become limited. In fact, this is also the case for many 2D pretraining tasks. Nevertheless, as verified in the experiments, the use of pretraining still exhibits advantages over random initialization, such as higher final performance, faster convergence rate, and better out-of-training-distribution robustness.
- The improvements over some full dataset cases in Table 2 are mainly credited to the use of the large-scale pretraining dataset, i.e., nuScenes, compared to these small-scale downstream datasets (SemanticPOSS, SemanticSTF, and DAPS-3D).
- An experiment on pretraining with much more image-LiDAR pairs is indeed desirable. Since nuScenes is already one of the largest LiDAR datasets, we seek the following two resources to consolidate the above observation:
- The *sweeps* data of nuScenes, which contains unannotated image-LiDAR pairs around nine times larger (around a total of 260k) than the used keyframes.
- The *Waymo Open* dataset, consisting of around a total of 150k image-LiDAR pairs from 64-beam LiDAR and five camera sensors. The point clouds are denser than that of nuScenes.
- Due to the time limit, we are only able to include these results in the revised manuscript. We will update our progress (if any) during the Author-Reviewer discussion session.
---
> ***Q3:** "The method proposed in this paper is expected to be very general, so it should yield better results for other 3D perception tasks (e.g., 3D object detection)."*
**A:** Thanks for your suggestion. Our response to this question is as follows:
- We agree with the reviewer that the pretrained weights should be general for related 3D perception tasks as well. We consolidate this statement by extending Seal (and existing LiDAR representation approaches [R1,R2]) to 3D panoptic segmentation -- a task that requires predicting both semantics and instance identities of LiDAR points.
- The results of different pretraining approaches pretrained on nuScenes (the same configuration as LiDAR semantic segmentation in the paper) and fine-tuned on the Panoptic nuScenes dataset [R3] under 5%, 10%, and 20% splits are as follows:
|Method|PQ (5%)|mIoU (5%)|PQ (10%)|mIoU (10%)|PQ (20%)|mIoU (20%)|
|-|:-:|:-:|:-:|:-:|:-:|:-:|
|Random|35.60|45.56|44.10|54.80|51.29|62.29|
|PPKT|47.73|53.16|53.63|58.95|56.25|64.42|
|SLidR|44.68|52.92|51.44|60.00|55.64|65.67|
|Seal|49.20|55.33|55.01|62.74|57.73|66.94|
- We use the pretrained MinkUNet as the semantic extractor in the 3D panoptic segmentation pipeline. The instance extractor consists of an instance head, followed by a clustering step. This step leverages the semantic predictions to filter out *stuff* points, retaining only those belonging to *thing* instances, such as ‘pedestrian’, ‘car’, and ‘bicyclist’. Subsequently, the remaining points are subjected to clustering to identify the instances based on the features from the instance head. Similar to [R4], the mean shift clustering is employed for this purpose, where the bandwidth is set to 2.5, and the minimum number of points per cluster is set to 20.
- Following the conventional reporting of PQ (panoptic quality) and mIoU, we observe from the above table that Seal also shows superiority over existing LiDAR pretraining approaches in the 3D panoptic segmentation task. The results verify that the pretrained weights can be general across different 3D perception tasks.
- To further solidify this claim, we seek another extension to the 3D object detection task. Due to the time limit, we will include the results in the revised manuscript.
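To make the clustering step concrete, here is a toy sketch in plain NumPy. The actual pipeline uses the mean shift implementation following [R4]; the function below is an illustrative stand-in (the 2D toy features and function name are our assumptions, while the bandwidth and minimum-cluster-size values follow the text):

```python
import numpy as np

def mean_shift_cluster(feats, bandwidth=2.5, n_iters=30, min_pts=20):
    """Toy mean shift: shift each point toward the mean of its neighbors
    within `bandwidth`, then group points whose converged modes coincide.
    Clusters smaller than `min_pts` are left unassigned (label -1)."""
    modes = feats.astype(float).copy()
    for _ in range(n_iters):
        for i in range(len(modes)):
            near = feats[np.linalg.norm(feats - modes[i], axis=1) < bandwidth]
            if len(near):
                modes[i] = near.mean(axis=0)
    labels = -np.ones(len(feats), dtype=int)  # -1 marks noise/unassigned
    next_id = 0
    for i in range(len(feats)):
        if labels[i] != -1:
            continue
        same = np.linalg.norm(modes - modes[i], axis=1) < bandwidth / 2
        if same.sum() >= min_pts:  # discard clusters that are too small
            labels[same] = next_id
            next_id += 1
    return labels
```

In the panoptic pipeline, the input features would be the instance-head embeddings of the remaining *thing* points; two well-separated instances then converge to two distinct modes and receive two instance IDs.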
---
**References:**
- [R1] Y.-C. Liu, et al. "Learning from 2D: Contrastive Pixel-to-Point Knowledge Transfer for 3D Pretraining," arXiv, 2021.
- [R2] C. Sautier, et al. "Image-to-LiDAR Self-Supervised Distillation for Autonomous Driving Data," CVPR, 2022.
- [R3] W. K. Fong, et al. "Panoptic nuScenes: A Large-Scale Benchmark for LiDAR Panoptic Segmentation and Tracking," RA-L, 2022.
- [R4] F. Hong, et al. "LiDAR-based Panoptic Segmentation via Dynamic Shifting Network," CVPR, 2021.
---
Rebuttal Comment 1.1:
Title: Looking forward to discussion
Comment: Dear Reviewer 7qT5,
We sincerely thank you for devoting time to this review and providing valuable comments.
---
Based on the reviewers' comments, we have revised our manuscript to include the following changes:
- We have supplemented more details on the definition and generation of semantic superpixel and superpoint.
- We have added some remarks on the choice and motivation behind each design.
- We conducted a study on the effect of possible misalignment between the LiDAR and camera sensors.
- We have extended the Seal framework to other 3D perception tasks.
- We have polished the elaboration and clarified some typos and misunderstandings in the main submission.
---
We will actively participate in the Author-Reviewer discussion session. Please don't hesitate to let us know of any additional comments on the manuscript or the changes.
Best regards,
The Authors
---
Reply to Comment 1.1.1:
Title: Authors' Response to Reviewer 7qT5
Comment: We thank Reviewer 7qT5 for devoting time to this review and providing valuable comments.
---
Regarding Q2 (copied below) in the previous comment:
> *"I believe that if additional image-lidar pairs are used, there will be further improvements. I hope to see similar experiments, such as pretraining with ten times the number of image-lidar pairs compared to nuScenes."*
**A:** Thanks for the suggestion. We now present the follow-up updates for this question.
- We believe pretraining with much more image-LiDAR pairs is indeed desirable. We use the *sweeps data* of nuScenes for this kind of pretraining, which contains unannotated image-LiDAR pairs around nine times larger (a total of ~ 260k frames) than the used keyframe data (~ 29k frames).
- We use the baseline SLidR for pretraining in this update and will include more results for Seal in the revision. The pretraining results on nuScenes are shown in the following table:
| Method | LP | 1% | 5% | 10% | 25% | Full |
|:-|:-:|:-:|:-:|:-:|:-:|:-:|
| Random | 8.10 | 30.30 | 47.84 | 56.15 | 65.48 | 74.66 |
| PointContrast | 21.90 | 32.50 | - | - | - | - |
| DepthContrast | 22.10 | 31.70 | - | - | - | - |
| PPKT | 35.90 | 37.80 | 53.74 | 60.25 | 67.14 | 74.52 |
| SLidR | 38.80 | 38.30 | 52.49 | 59.84 | 66.91 | 74.79 |
| **SLidR (w/ sweeps)** | 39.57 | 39.40 | 53.21 | 60.69 | 67.44 | 75.09 |
| ST-SLidR | 40.48 | 40.75 | 54.69 | 60.75 | 67.70 | 75.14 |
| Seal | 44.95 | 45.84 | 55.64 | 62.97 | 68.41 | 75.60 |
- We observe that using more image-LiDAR pairs during pretraining indeed improves the performance, for both linear probing and downstream fine-tuning tasks with different ratios.
- The improvements are not linearly scaled up with the amount of data used for pretraining. We conjecture that this is because the *sweeps data* of nuScenes are less diverse, since they were collected sequentially.
- Nevertheless, we believe that pretraining with more image-LiDAR pairs is promising and desirable; with the advent of datasets with larger scales, the proposed pretraining framework could become more powerful and can achieve better cross-dataset performance.
---
Last but not least, we thank Reviewer 7qT5 again for the time and effort devoted to this review. | Summary: This paper proposes a novel framework named Seal. Seal distills VFMs into point clouds, enabling efficient segmentation of various automotive point cloud sequences without requiring extensive annotation during the pre-training stage. It exhibits excellent performance across multiple datasets.
Strengths: Seal distills the feature extraction capability of 2D VFMs to point clouds. Compared to previous works, this paper promotes cross-modal representation learning through the proposed spatiotemporal consistency. The results obtained by Seal on 11 different point cloud datasets demonstrate its effectiveness and superiority, highlighting its significant potential for 3D feature learning.
Weaknesses: The innovative contributions of the paper can be summarized into two points compared to previous works: 1. Distilling the representation learning capability from VFMs into point cloud data processing. 2. Introducing semantic superpoint temporal consistency to promote cross-modal learning. However, it is noted that the level of innovation in the paper might not be particularly strong.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. I have some questions regarding the semantic superpoint temporal consistency. Firstly, according to Figure 4, the segmentation results provided by VFMs are at the instance level, while this paper aims to obtain a pretraining model for semantic segmentation tasks. According to Equation 2, this paper treats instances of the same object as positive samples and the rest as negative samples. Therefore, there is a high possibility that negative samples may include instances of the same class. This could potentially hurt the segmentation task. On the other hand, how is the correspondence between moving objects in point cloud data at different time steps obtained?
2. The paper mentions the ability to achieve good performance even in cases where camera and LiDAR calibration information is unavailable (on page 6 line 199). However, without approximate intrinsic and extrinsic parameters, establishing the 2D-3D spatial correspondence would be challenging. Therefore, I am curious about the performance in such scenarios. However, it appears that the paper does not provide specific experiments or results for these particular cases.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The statement in the limitation section that the assumption of obtaining calibrated and synchronized data between LiDAR and cameras is overly idealistic contradicts the description in the main text.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank Reviewer 1xK9 for devoting time to this review and providing valuable comments.
---
> ***Q1:** "(i) The results provided by VFMs are at the instance level, while this paper aims to obtain a pretraining model for semantic segmentation tasks. (ii) The negative samples may include instances of the same class, which could potentially hurt the segmentation task. (ii) how is the correspondence between moving objects in point cloud data at different time steps obtained?"*
**A:** Thank you for the comment.
- For question (i): The reason for performing instance-level temporal consistency has two aspects:
- Unlike 2D scenarios with stable shape and texture cues, 3D objects and backgrounds have spatial coherence and complex point arrangements. Treating each point separately and applying point-level temporal consistency regularization could result in fragmented or noisy representations that don't capture the object's inherent structure.
- The class distribution of LiDAR scenes is severely long-tailed, with a substantial portion of points belonging to static classes like 'driveable surface', 'manmade', and 'vegetation'. For instance, in the nuScenes training set, approximately 90.6% of points correspond to *static* classes, leaving only around 9.4% for *dynamic* (instance) classes. Adequate representations for static classes can be acquired through fine-tuning.
- For question (ii): We agree that it is possible for negative samples to include instances of the same class. In fact, this "self-conflict" among negative samples is a common problem, but our method is more reliable than prior works. For one thing, the semantic superpixels generated by VFMs are semantically richer than those from conventional methods and also provide more complete instances, which reduces, to a certain extent, the possibility of "self-conflict". Besides, our method utilizes far fewer segments (\~30) per frame than the prior work (\~300). We verify through experiments that our approach outperforms the prior art.
- For question (iii): We use the following steps to associate the moving objects in different timestamps:
- *Coordinate transformation*. The point cloud $P^{t+n}$ at timestamp $t+n$ is first transformed into the coordinate system of $P^t$ at $t$. This alignment is established using the sensor extrinsic and intrinsic matrices. The transformed $P^{t+n}$ is concatenated with $P^t$, creating a composite point cloud $P$.
- *RANSAC segmentation*. The composite point cloud $P$ is divided into ground and non-ground point groups using the RANSAC algorithm, enabling differentiation between static and moving points.
- *HDBSCAN clustering.* The non-ground points are then used for clustering. This provides distinct segments representing different instances in the composite point cloud $P$.
- *Instance labeling.* Each segment is assigned a unique instance ID, where the ground plane points are labeled as background.
- Consequently, we separate $P$ into $P^{t+n}$ and $P^t$ via reverse coordinate transformation. The correspondence of the same moving object across different timestamps can thus be extracted using the instance IDs.
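A minimal NumPy sketch of the middle two steps (plane segmentation and clustering of the non-ground points). We implement a basic RANSAC plane fit directly and use a greedy single-linkage clustering as a stand-in for HDBSCAN; all function names and parameter values are illustrative, not the paper's exact implementation:

```python
import numpy as np

def ransac_ground(points, n_iters=100, dist_thresh=0.2, seed=0):
    """Fit the dominant plane with RANSAC; return a boolean ground mask."""
    rng = np.random.default_rng(seed)
    best = np.zeros(len(points), dtype=bool)
    for _ in range(n_iters):
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        normal = np.cross(p1 - p0, p2 - p0)
        if np.linalg.norm(normal) < 1e-9:  # degenerate (collinear) sample
            continue
        normal = normal / np.linalg.norm(normal)
        mask = np.abs((points - p0) @ normal) < dist_thresh
        if mask.sum() > best.sum():  # keep the plane with the most inliers
            best = mask
    return best

def cluster_instances(points, radius=0.5):
    """Greedy single-linkage clustering of non-ground points
    (an HDBSCAN stand-in); returns one instance ID per point."""
    labels = -np.ones(len(points), dtype=int)
    next_id = 0
    for i in range(len(points)):
        if labels[i] != -1:
            continue
        labels[i], stack = next_id, [i]
        while stack:  # flood-fill points within `radius` of the cluster
            j = stack.pop()
            near = np.where((np.linalg.norm(points - points[j], axis=1) < radius)
                            & (labels == -1))[0]
            labels[near] = next_id
            stack.extend(near.tolist())
        next_id += 1
    return labels
```

With the two frames already aligned in a common coordinate system, running these on the composite point cloud separates ground from instances, and the per-segment instance IDs carry the cross-timestamp correspondence.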
---
> ***Q2:** "I am curious about the performance in cases where camera and LiDAR calibration information is unavailable."*
**A:** Thanks for the comment.
- Accurate calibration is important for establishing correct correspondences between LiDAR and cameras. In most cases, these sensors should be well calibrated on an autonomous car; it is unusual for the calibration to be completely unknown, but it may be imprecise due to a lack of maintenance. Hence, we conduct the following experiments to validate the robustness of our method.
- For each point coordinate $(x_i, y_i, z_i)$ in a LiDAR point cloud, the corresponding pixel $(u_i, v_i)$ can be found by:
$$[u_i, v_i, 1]^T = \frac{1}{z_i} \cdot \Gamma_K\cdot \Gamma \cdot [x_i, y_i, z_i, 1]^T,$$
where $\Gamma \in \mathbb{R}^{4\times4}$ is the camera extrinsic matrix that consists of a rotation matrix and a translation matrix, and $\Gamma_K\in \mathbb{R}^{3\times4}$ is the camera intrinsic matrix. To simulate the misalignment between LiDAR and cameras, we add random noises to the camera extrinsic matrix $\Gamma$.
- **[Table 1]** Add $\pm$1% random noises to $\Gamma$:
|Method|1%|5%|10%|25%|
|-|-|-|-|-|
|Random|30.30|47.84|56.15|65.48|
|PPKT|34.94|51.11|58.54|65.01|
|SLidR|37.92|53.08|59.89|66.90|
|Seal|45.23|55.71|62.62|68.13|
- **[Table 2]** Add $\pm$5% random noises to $\Gamma$:
|Method|1%|5%|10%|25%|
|-|-|-|-|-|
|Random|30.30|47.84|56.15|65.48|
|PPKT|33.69|51.40|58.00|64.11|
|SLidR|38.00|52.36|60.01|64.10|
|Seal|45.66|55.42|62.77|68.01|
- **[Table 3]** Add $\pm$10% random noises to $\Gamma$:
|Method|1%|5%|10%|25%|
|-|-|-|-|-|
|Random|30.30|47.84|56.15|65.48|
|PPKT|33.35|50.98|57.84|63.52|
|SLidR|37.30|51.11|58.50|64.50|
|Seal|44.80|54.45|61.80|68.29|
- **[Table 4]** Performance w/o calibration noises (from Table 1 in the paper):
|Method|1%|5%|10%|25%|
|-|-|-|-|-|
|Random|30.30|47.84|56.15|65.48|
|PPKT|37.80|53.74|60.25|67.14|
|SLidR|38.30|52.49|59.84|66.91|
|Seal|45.84|55.64|62.97|68.41|
- From the above results we observe:
- The possible calibration errors in-between LiDAR and camera sensors will cause performance degradation for different pretraining approaches, i.e., Tables 1 to 3 compared to that of Table 4.
- The performance degradation for PPKT is especially prominent; we conjecture that this is because the point-wise consistency regularization of PPKT relies heavily on the calibration accuracy and encounters problems under misalignment.
- Both SLidR and Seal exhibit a certain robustness; we believe the superpixel-level consistency is less sensitive to calibration perturbations. It is worth mentioning that Seal maintains good performance under calibration errors, since i) our VFM-assisted representation learning tends to be more robust; and ii) we also enforce superpoint temporal consistency, which does not rely on the 2D-3D correspondence.
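For concreteness, the noise-injection protocol above can be sketched as follows. The multiplicative noise model (scaling each entry of $\Gamma$ by $1+\epsilon$) and the function names are our illustrative assumptions; the rebuttal only states that random noises are added to the extrinsic matrix:

```python
import numpy as np

def project_points(points, K, T):
    """Project LiDAR points (N, 3) to pixels via the stated formula:
    [u, v, 1]^T = (1/z) * K (3x4 intrinsics) @ T (4x4 extrinsics) @ [x, y, z, 1]^T."""
    homo = np.c_[points, np.ones(len(points))]  # (N, 4) homogeneous coords
    cam = (T @ homo.T).T                        # into the camera frame
    uv = (K @ cam.T).T                          # (N, 3) homogeneous pixels
    return uv[:, :2] / uv[:, 2:3]               # perspective divide

def perturb_extrinsics(T, level=0.01, seed=None):
    """Simulate mis-calibration: scale each entry of T by (1 + eps),
    with eps drawn uniformly from [-level, level] (e.g., level=0.01 for +/-1%)."""
    rng = np.random.default_rng(seed)
    return T * (1.0 + rng.uniform(-level, level, T.shape))
```

The 2D-3D correspondences used during pretraining are then recomputed with the perturbed extrinsics, which shifts every projected pixel and degrades the point-to-superpixel assignments.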
---
Rebuttal Comment 1.1:
Title: Looking forward to discussion
Comment: Dear Reviewer 1xK9,
We sincerely thank you for devoting time to this review and providing valuable comments.
---
Based on the reviewers' comments, we have revised our manuscript to include the following changes:
- We have supplemented more details on the definition and generation of semantic superpixel and superpoint.
- We have added some remarks on the choice and motivation behind each design.
- We conducted a study on the effect of possible misalignment between the LiDAR and camera sensors.
- We have extended the Seal framework to other 3D perception tasks.
- We have polished the elaboration and clarified some typos and misunderstandings in the main submission.
---
We will actively participate in the Author-Reviewer discussion session. Please don't hesitate to let us know of any additional comments on the manuscript or the changes.
Best regards,
The Authors | Summary: This paper introduces a novel framework, called *Seal*, that leverages VFMs for self-supervised representation learning on automotive point cloud sequences. The main idea is to leverage the 2D-3D correspondence between LiDAR and camera sensors and construct high-quality contrastive samples for cross-modal representation learning. The paper also proposes a superpoint temporal consistency regularization to enforce geometric stability across different timestamps. The paper evaluates the proposed framework on 11 different point cloud datasets and shows that it outperforms previous state-of-the-art methods in both linear probing and few-shot fine-tuning settings.
Strengths: **Originality:**
- the **first** work utilizing 2D VFMs for self-supervised representation learning on large-scale 3D point clouds
- a novel VFM-assisted contrastive learning objective that transfers the knowledge from the pretrained 2D to the 3D network at the semantic superpixel level
- a superpoint temporal consistency regularization
**Quality:**
- well-written and organized, with clear motivation, problem formulation, related work, methodology, experiments, conclusion and discussion & limitations
- extensive experiments on 11 different point cloud datasets and sufficient ablation studies
- sufficient mathematical formulations and derivations to support the proposed methods
- well-written and sufficient Supplementary Material with demo visualizations
**Clarity:**
- clear and easy to follow
- uses consistent notations and symbols throughout the text and equations
- helpful figures and tables to illustrate the main ideas and results
**Significance:**
- significant for both research and practice in 3D perception and representation learning
- addresses an important problem of segmenting diverse automotive point clouds in a scalable, consistent, and generalizable manner
Weaknesses: The datasets conducted on are all **outdoor** point cloud dataset for autonomous driving. Unless the authors conduct more experiments on **indoor** point cloud datasets, they'd better just state that "Segment Any **Automotive** Point Cloud Sequences ..." in their title.
Minior Issue:
- Undefined variable $L^{tmp}$ in Equ. (3)
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: In Table 3 (Full), it seems *Seal* performs not well in `Cross`, `Echo` and `Sensor` (especially `Sensor`, 39.85 vs 49.21). Could you please provide some possible reasons about that?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 4 excellent
Limitations: Authors should be rewarded that the limitations are explicitly mentioned in *Discussion & Limitation*.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank Reviewer 3rsk for devoting time to this review and providing valuable comments.
---
> ***Q1:** "Unless the authors conduct more experiments on indoor point cloud datasets, they'd better just state that 'Segment Any Automotive Point Cloud Sequences …' in their title."*
**A:** Thanks for your suggestion. We will seek a more proper title description in the revision. In this work, we mainly targeted segmenting point cloud *sequences*, where such sequences are often in the form of *automotive point clouds*. Nevertheless, the proposed spatial and temporal consistency regularization objectives are not inherently constrained to the current use cases. We will seek further extensions of this framework to other point cloud datasets as well, such as those from indoor scenes.
---
> ***Q2:** "Undefined variable $L^{tmp}$ in Equ. (3)."*
**A:** We thank the reviewer for pointing out this typo. We have addressed this issue in the revised paper. The symbol $\mathcal{L}^{tmp}$ denotes the temporal consistency loss, which encourages consistency between the mean representations of a segment across different timestamps.
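As an illustration only (not the exact objective from the paper, which we cannot reproduce here), a simplified temporal consistency term comparing mean segment embeddings across two timestamps via cosine distance could look like:

```python
import numpy as np

def temporal_consistency_loss(feat_t, feat_tn, seg_t, seg_tn, n_segments):
    """Illustrative loss: penalize cosine distance between the mean
    embedding of each segment at time t and at time t+n.
    feat_*: (N, D) point features; seg_*: (N,) segment IDs in [0, n_segments)."""
    loss = 0.0
    for s in range(n_segments):
        mu_t = feat_t[seg_t == s].mean(axis=0)    # mean embedding at t
        mu_tn = feat_tn[seg_tn == s].mean(axis=0)  # mean embedding at t+n
        cos = (mu_t @ mu_tn) / (np.linalg.norm(mu_t) * np.linalg.norm(mu_tn) + 1e-8)
        loss += 1.0 - cos
    return loss / n_segments
```

When the two frames produce identical segment representations, the loss is (numerically) zero, so minimizing it pulls each segment's representation toward agreement across timestamps.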
---
> ***Q3:** "In Table 3, it seems Seal performs not well in Cross, Echo, and Sensor. Can you provide some possible reasons for that?"*
**A:** Thanks for your question. We would like to highlight that the performance discrepancy of Seal under the *'Crosstalk'*, *'Incomplete Echo'*, and *'Cross-Sensor'* scenarios in Table 3 is mainly attributed to the use of different LiDAR segmentation backbones.
- When comparing different pretraining approaches with *the same LiDAR segmentation backbone*, i.e., MinkUNet for the last four rows in Table 3, Seal exhibits superiority over all corruption types except *‘Incomplete Echo’*, which is slightly lower than that of SLidR (59.87 vs. 61.16).
- When comparing among backbones of different modalities, e.g., range view (CENet [R2]), raw points (WaffleIron [R3]), and point-voxel fusion (SPVCNN [R4]), the robustness against different corruption types differs. As validated in Robo3D [R1], different LiDAR modalities often show diverse robustness behaviors. On nuScenes, sparse voxel-based backbones like MinkUNet show sub-par performance under point cloud density changes, such as *'Crosstalk'* (extra noisy points within the mid-range areas between two or multiple LiDAR sensors), *'Incomplete Echo'* (missed point detections for instances with dark colors), and *'Cross-Sensor'* (point clouds captured by sensors of different configurations).
- To further solidify this observation, we will explore the robustness of Seal and other pretraining approaches (e.g. PPKT and SLidR) using LiDAR segmentation backbones from other modalities. Due to the time limit, we are only able to supplement this result in the updated paper.
---
**References:**
- [R1] L. Kong, et al. “Robo3D: Towards Robust and Reliable 3D Perception against Corruptions.” arXiv, 2023.
- [R2] H.-X. Cheng, et al. “CENet: Toward Concise and Efficient LiDAR Semantic Segmentation for Autonomous Driving.” ICME, 2022.
- [R3] G. Puy, A. Boulch, and R. Marlet. “Using A Waffle Iron for Automotive Point Cloud Semantic Segmentation.” arXiv, 2023.
- [R4] H. Tang, et al. “Searching Efficient 3D Architectures with Sparse Point-Voxel Convolution.” ECCV, 2020.
---
Rebuttal Comment 1.1:
Title: Looking forward to discussion
Comment: Dear Reviewer 3rsk,
We sincerely thank you for devoting time to this review and providing valuable comments.
---
Based on the reviewers' comments, we have revised our manuscript to include the following changes:
- We have supplemented more details on the definition and generation of semantic superpixel and superpoint.
- We have added some remarks on the choice and motivation behind each design.
- We conducted a study on the effect of possible misalignment between the LiDAR and camera sensors.
- We have extended the Seal framework to other 3D perception tasks.
- We have polished the elaboration and clarified some typos and misunderstandings in the main submission.
---
We will actively participate in the Author-Reviewer discussion session. Please don't hesitate to let us know of any additional comments on the manuscript or the changes.
Best regards,
The Authors | Summary: The manuscript presents Seal, a framework that leverages VFMs (Visual Foundation Models) to segment diverse point cloud sequences in autonomous driving scenarios. The proposed approach employs VFMs to initially segment superpixels in 2D camera images and subsequently projects them to 3D superpoints. Then, it incorporates two specific pre-training schemes: spatial contrastive learning, which involves training cross-modally from pre-trained 2D features to 3D representations, and temporal consistency regularization, which ensures the consistency of 3D point features across different timestamps. Extensive experimentation validates the effectiveness and robustness of the proposed framework.
Strengths: 1. The manuscript exhibits clear and concise writing, making it easy to comprehend. The mathematical explanations and figures are presented with high clarity, facilitating understanding.
2. Through extensive experiments, the study demonstrates the effectiveness and robustness of the proposed method, establishing its reliability.
3. The main contribution of the manuscript is the development of the framework that can serve as a valuable resource for future reference and practical implementation.
Weaknesses: Novelty: The novelty of the manuscript may be somewhat limited. While the authors claim that the main novelty lies in leveraging VFM models for superpixel set segmentation and introducing two specific pretraining schemes, it is worth noting that the use of VFM models for segmentation is already a widely discussed topic and is short of novelty. Additionally, the proposed pretraining schemes closely follow common cross-modal contrastive learning approaches. This may give the impression that the paper is riding on the popularity of VFM models without truly exploring their potential for feature-level distillation.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. What is the parameterization of a superpixel? If a superpixel is selected as a representative pixel from an area with the same class in VFM outputs, how is the pixel specifically chosen?
2. What is the motivation behind enforcing the convergence of 3D point features to a single mean representation? In the 2D scenario, it is common for features at different pixels of the same object to be distinct, as they capture diverse parts and information about the object. Shouldn't this principle apply to the 3D case?
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: Besides the limitations the authors discuss in the manuscript, I've seen no additional specific limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank Reviewer EPuy for devoting time to this review and providing valuable comments.
---
> ***Q1:** "The use of VFM models for segmentation is widely discussed. The proposed pretraining schemes closely follow common cross-modal contrastive learning approaches. The paper is riding on the popularity of VFM models without truly exploring their potential for feature-level distillation."*
**A:** We thank the reviewer for the comment. Our response to this question is as follows:
- Using vision foundation models (VFMs) for segmentation is among the hottest research topics in the computer vision community. Several recent attempts have been made to leverage VFMs (e.g. SAM [R1]) for tackling different vision tasks, such as remote sensing [R2], open-set object detection [R3], tracking [R4], image tagging [R5], medical imaging [R6], and many others.
- It is worth noting that, different from all the above pursuits, we present the first study on utilizing VFMs for self-supervised representation learning on large-scale 3D point clouds. We believe this work could enlighten follow-up research on designing more robust and scalable LiDAR segmentation frameworks.
- Seal adopts the camera-to-LiDAR contrastive objective [R7] during representation learning. The knowledge from VFMs is leveraged, in the form of semantic superpixels and superpoints, to help better enforce cross-modal consistency. We agree with the reviewer that exploring feature-level distillation is also a promising solution. However, there are several limitations to conducting such a distillation using existing VFMs:
- *Computational overhead*. Current VFMs are often constrained by large model sizes and suffer from high latency. Conducting feature-level distillation would require significantly more computing resources compared to the original LiDAR segmentation task.
- *Domain gap and representational differences*. VFMs are primarily trained for specific tasks; the features learned might not be directly transferable to the LiDAR modality due to the inherent differences in sensor data and environmental characteristics.
- *Loss of modality-specific information*. When using VFMs' features for distillation, modality-specific information present in LiDAR data might be lost. VFMs' features are optimized for vision-related tasks and might not capture the intricate details and characteristics unique to LiDAR data.
---
> ***Q2:** "What is the parameterization of a superpixel? If a superpixel is selected as a representative pixel from an area with the same class in VFM outputs, how is the pixel specifically chosen?"*
**A:** Thanks for asking. In our framework, a superpixel denotes a group of semantically cohesive pixels segmented by VFMs, which is similar to the concept of a semantic mask. That is to say, a superpixel is not just one representative pixel from an area but a group of pixels that tend to belong to the same semantic class. Some examples of superpixels are shown in the first row of Figure 1, where each color represents a distinct superpixel generated by either SLIC or VFMs.
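To make the superpixel-to-superpoint mapping concrete, here is a minimal illustrative sketch (our own example, not the paper's implementation; the function and variable names are hypothetical, and the camera-to-LiDAR projection is assumed to be computed already):

```python
import numpy as np

def superpoints_from_superpixels(pixel_uv, superpixel_map):
    """Group LiDAR points into superpoints by looking up the superpixel
    label of each point's projected image coordinate.

    pixel_uv:       (N, 2) integer (u, v) image coordinates of N projected points
    superpixel_map: (H, W) integer label map produced by a VFM or SLIC
    Returns a dict mapping superpixel label -> indices of its member points.
    """
    # Index the label map at each point's projected pixel (v selects the row).
    labels = superpixel_map[pixel_uv[:, 1], pixel_uv[:, 0]]  # shape (N,)
    groups = {}
    for idx, lab in enumerate(labels):
        groups.setdefault(int(lab), []).append(idx)
    return groups
```

Each resulting group is a superpoint: the set of 3D points whose projections fall inside the same semantically cohesive superpixel.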
---
> ***Q3:** "What is the motivation behind enforcing the convergence of 3D point features to a single mean representation? In the 2D scenario, it is common for features at different pixels of the same object to be distinct, as they capture diverse parts and information about the object. Shouldn't this principle apply to the 3D case?"*
**A:** We thank the reviewer for asking this insightful question. Our response is as follows:
- The motivation for enforcing the convergence of 3D point features to a single mean representation is rooted in the distinct nature of LiDAR data. Unlike 2D scenarios, where pixel features at different locations of the same object can encapsulate various object parts and details, the complexities of LiDAR point clouds pose unique challenges:
- Unlike 2D images, which contain relatively stable shape and texture cues, 3D objects and backgrounds are characterized by their spatial coherence and intricate arrangements of points. Treating each individual point as an independent entity might lead to fragmented or noisy representations that fail to capture the object's underlying structure.
- Here we enforce the convergence of 3D point features to a single mean representation to synthesize a cohesive depiction of the objects and backgrounds, while mitigating noise and variability intrinsic to the raw LiDAR data. This could facilitate, to a certain extent, a more stable and informative 3D representation.
- We validate through experiments that this single mean representation exhibits better performance than the point-wise contrastive approaches, such as PointContrast [R8] and PPKT [R9], under the linear probing, few-shot fine-tuning, and robustness evaluations (i.e. Tables 1 to 3 in the main text).
- What is more, the per-point contrastive learning objectives, which are more popular for indoor scenes captured by RGB-D cameras, are computationally more expensive than the mean representation for handling outdoor LiDAR scenes; the automotive point clouds are both larger in scale and richer in diversity than the indoor scenes.
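As an illustration of the mean-representation pooling described above (a hedged sketch of the general idea, not the authors' actual implementation), each superpoint's embedding can be computed as the mean of its member point features with a single scatter-add:

```python
import numpy as np

def superpoint_mean_features(point_feats, superpoint_ids, num_superpoints):
    """Pool per-point features into one mean embedding per superpoint.

    point_feats:    (N, D) features of N points
    superpoint_ids: (N,) superpoint index of each point
    Returns a (num_superpoints, D) array of mean features.
    """
    d = point_feats.shape[1]
    sums = np.zeros((num_superpoints, d))
    counts = np.zeros(num_superpoints)
    # Unbuffered scatter-add: accumulate features and counts per superpoint.
    np.add.at(sums, superpoint_ids, point_feats)
    np.add.at(counts, superpoint_ids, 1)
    return sums / np.maximum(counts, 1)[:, None]
```

These mean embeddings are cheaper to contrast than per-point features, which matters for large outdoor LiDAR scenes.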
---
**References:**
- [R1] Segment anything. ICCV, 2023.
- [R2] RSPrompter: Learning to Prompt for Remote Sensing Instance Segmentation based on Visual Foundation Model. arXiv, 2023.
- [R3] Grounding DINO: Marrying DINO with Grounded Pre-Training for Open-Set Object Detection. arXiv, 2023.
- [R4] Segment and Track Anything. arXiv, 2023.
- [R5] Recognize Anything: A Strong Image Tagging Model. arXiv, 2023.
- [R6] Segment Anything in Medical Images. arXiv, 2023.
- [R7] Image-to-LiDAR Self-Supervised Distillation for Autonomous Driving Data. CVPR, 2022.
- [R8] PointContrast: Unsupervised Pre-training for 3D Point Cloud Understanding. ECCV, 2020.
- [R9] Learning from 2D: Contrastive Pixel-to-Point Knowledge Transfer for 3D Pretraining. arXiv, 2021.
---
Rebuttal Comment 1.1:
Title: Looking forward to discussion
Comment: Dear Reviewer EPuy,
We sincerely thank you for devoting time to this review and providing valuable comments.
---
Based on the reviewers' comments, we have revised our manuscript to include the following changes:
- We have supplemented more details on the definition and generation of semantic superpixels and superpoints.
- We have added some remarks on the choice and motivation behind each design.
- We conducted a study on the effect of possible misalignment between the LiDAR and camera sensors.
- We have extended the Seal framework to other 3D perception tasks.
- We have polished the elaboration and clarified some typos and misunderstandings in the main submission.
---
We will actively participate in the Author-Reviewer discussion session. Please don't hesitate to let us know of any additional comments on the manuscript or the changes.
Best regards,
The Authors | null | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
A Unified Fast Gradient Clipping Framework for DP-SGD | Accept (poster) | Summary: The paper provides a unification of ad-hoc analysis and interpretations of the ghost clipping algorithm under a single framework. It also shows that for certain operations, such as fully-connected and embedding layer computations, further improvements to the runtime and storage costs of existing decompositions can be deduced using certain components of the proposed framework.
Strengths: - **Originality**: The paper proposes a novel framework for fast gradient clipping used in DP-SGD. This is a significant theoretical contribution to the field of privacy of machine learning.
- **Quality**: The paper is well-organized. A thorough explanation of the proposed framework, including mathematical derivations and experimental results is provided.
- **Clarity**: The paper is easy to follow and understand, even for readers who are not experts in the field. The authors provide clear explanations of the background, motivations, concepts, and techniques used, and the experimental results are presented in a concise and understandable manner.
- **Significance**: The paper has significant implications for the specific field of privacy of machine learning. It provides a unified perspective on the ghost clipping algorithm in DP-SGD.
Weaknesses: The paper focuses only on a specific technique for fast gradient clipping, i.e., ghost clipping. Whether ghost clipping lies at the heart of the broader field of private deep learning is unclear to me.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: How can the proposed framework be generalized beyond the ghost-clipping algorithm?
Confidence: 1: Your assessment is an educated guess. The submission is not in your area or the submission was difficult to understand. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Limitations are not discussed by the authors. Perhaps the possible relation or extension to other clipping algorithms can be mentioned in the final as a separate concluding section which is now missing.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for taking the time to review our manuscript. Below, you find some responses to some of your questions and concerns.
> The paper focuses only on a specific technique of gradient clipping, i.e., ghost clipping. Whether this ghost clipping lies at the heart of the big field of privacy of deep learning or not is unclear to me.
As far as we are aware, there are no other techniques in the literature for quickly clipping gradients in neural networks aside from ghost clipping.
> How can the proposed framework be generalized beyond the ghost-clipping algorithm?
We are unsure what the reviewer is asking here. The ghost-clipping technique is a special instance of our proposed framework, not the other way around.
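For readers unfamiliar with the underlying trick, here is a minimal sketch of the classical ghost-norm identity for a fully-connected layer (our own illustration of the well-known special case, not the paper's generalized framework): for weight gradient $g_i a_i^\top$, one has $\|g_i a_i^\top\|_F^2 = \|a_i\|^2 \|g_i\|^2$, so per-example norms never require materializing per-example gradients.

```python
import numpy as np

def ghost_sq_norms(activations, output_grads):
    """Per-example squared gradient norms for a linear layer W (no bias),
    without materializing the per-example gradients g_i a_i^T.

    activations:  (B, d_in) layer inputs a_i
    output_grads: (B, d_out) gradients g_i of the loss w.r.t. the layer outputs
    Uses ||g_i a_i^T||_F^2 = ||a_i||^2 * ||g_i||^2 (non-sequential inputs).
    """
    return (activations ** 2).sum(axis=1) * (output_grads ** 2).sum(axis=1)
```

This costs O(B(d_in + d_out)) time and memory, versus O(B d_in d_out) for storing the per-example outer products.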
---
Rebuttal Comment 1.1:
Comment: Thank you very much for your clarification. You may ignore my question in the "Questions" because I have misunderstood some perspectives. I keep my score as "Weak Accept" after reading other reviewers' comments. | Summary: This paper provides a unified framework for efficiently computing the gradient norms of individual examples for a wide range of neural network architectures, which significantly decreases runtime and storage costs for implementing DP-SGD.
Strengths: 1. The considered problem is important: Computing per-example gradients is a computation bottleneck for previous implementations of DP-SGD.
2. Theoretical and empirical improvements over baselines are clearly presented and significant. Tables 1 and 2 summarize the time and storage complexity for previous work and the proposed algorithm, and Figures 1 and 2 summarize the observed improvements in experiments. The theoretical and the empirical improvements are both convincing.
3. The presentation is rigorous and clear.
Weaknesses: 1. The experimental evaluation does not provide a big-picture idea of the end-to-end savings provided by the proposed framework. Figures 1 and 2 exhibit the runtime/storage costs only for the gradient computation operation, and it does not show how the overall costs are affected. If gradient computation accounts for a small portion of the overall runtime, then the overall time savings will not be significant. As it stands, the experimental results do not give a concrete idea for the overall savings in runtime, which is the metric of interest for practitioners.
2. It would be nice to see an empirical evaluation for the more popular layer types mentioned in the text, e.g. convolution and attention.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: 1. What is the overall reduction in runtime/storage cost provided by the proposed framework?
2. How does the empirical performance of the proposed framework compare to that of ghost clipping for convolution layers and attention layers?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 3 good
Limitations: There is no discussion of societal impact beyond the underlying (implied) importance of user privacy. However, I don't think any further discussion of societal impact is necessary for this work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your positive evaluation of our work. We hope the below comments will address the issues you brought forth.
> The experimental evaluation does not provide a big-picture idea of the end-to-end savings provided by the proposed framework. Figures 1 and 2 exhibit the runtime/storage costs only for the gradient computation operation, and it does not show how the overall costs are affected. If gradient computation accounts for a small portion of the overall runtime, then the overall time savings will not be significant. As it stands, the experimental results do not give a concrete idea for the overall savings in runtime, which is the metric of interest for practitioners.
For feed-forward neural networks, the forward and backward steps use a similar amount of compute and memory, so highlighting the decreased gradient computation time seemed reasonable to us. However, to provide a more complete picture, we can add an end-to-end runtime analysis of our technique compared to the naive clipping technique.
> It would be nice to see an empirical evaluation for the more popular layer types mentioned in the text, e.g. convolution and attention.
As the convolution operator is equivalent to a fully-connected layer (see [1] below) and the parameter transforms of multihead attention layers are merely matrix multiplications (see our Appendix E.1), we expect their empirical performance to be similar to what is shown for fully connected layers (see Figures 1 and 2).
[1] Bu, Z., Mao, J., & Xu, S. (2022). Scalable and efficient training of large convolutional neural networks with differential privacy. Advances in Neural Information Processing Systems, 35, 38305-38318.
> What is the overall reduction in runtime/storage cost provided by the proposed framework?
In our experiments, the large majority of the work performed in the algorithm was in the computation of the gradient norms. Hence, the graphs in Figures 1 and 2 are also indicative of the overall cost reduction provided by our framework.
> How does the empirical performance of the proposed framework compare to that of ghost clipping for convolution layers and attention layers?
See our response above about convolution and attention layers.
---
Rebuttal Comment 1.1:
Comment: Thank you for your response. I agree with you that the time taken for gradient computation is likely to be representative of the overall computation time, but the accuracy of this representation is not certain until it is actually measured. It would still be nice to see a comparison of the overall running time. I feel similarly about a comparison for the other layer types. It seems likely that the other layer types will behave similarly to the fully connected layer, but we cannot know for sure until it is measured. Still, these are small details and I am satisfied with the quality of the paper, so I will maintain my recommendation for acceptance. | Summary: This paper unifies a framework that enables us to apply ghost clipping algorithms to new layers of neural networks, apart from fully connected, feed-forward layers. In particular, the paper discusses efficient implementations of computing norms for running DP-SGD to train neural networks.
Strengths: The contributions of this paper are as follows:
1. It shows how to utilize the proposed framework to apply ghost clipping techniques to new layers for DP-SGD on training neural networks. Three main examples are included to discuss how to implement efficient algorithms to compute the gradient norm and to run DP-SGD.
2. Experimental results demonstrate the improvements of the proposed framework over the classical ghost clipping framework in runtime and memory requirements.
Weaknesses: Literature on the importance of DP-SGD and clipping operators is lacking in this paper. It is better to include them in Section 3 (Previous work) or in the introduction section to motivate the study of this paper in a broader impact.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: N/A
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your positive impression of our work! Below you will find our comments addressing your main concern.
> Literature on the importance of DP-SGD and clipping operators is lacking in this paper. It is better to include them in Section 3 (Previous work) or in the introduction section to motivate the study of this paper in a broader impact.
In the second paragraph of the “Introduction” section (Section 1), we discussed the importance of DP-SGD, motivated the use of gradient clipping, and remarked on the importance of private training. Moreover, in the third paragraph of the “Introduction” section and the “Previous work” section (Section 3), we discussed why the process of clipping is hard to scale to large models. If you have any specific pointers related to the importance of DP-SGD we would be happy to add them as additional references. | Summary: The paper studies the memory/time complexity of the per-sample gradients norm computation step in the DP-SGD algorithm. The authors present a general theoretical framework that generalizes the idea of ghost norm computation (computing the norm of the persample gradients without having to store the gradients), and can be applied to arbitrary layers of a DNN (if an ad hoc decomposition can be found). The authors show early experiments on the benefits of their methods compared to previous ghost clipping techniques.
Strengths: originality: - Computing per-sample gradient norms is a real bottleneck when training DNNs with DP-SGD. Typically, the Opacus library (PyTorch for privacy) stores the gradients per sample before computing the norms, which can be memory intensive, especially for large architectures. Other methods have been proposed to compute the norm without storing the gradients (ghost norm), but no general framework has been proposed yet. This paper gives a clear problem formulation and a general theoretical framework to approach this problem. I was very happy to see a theoretical work that tries to tackle this issue.
quality: The math is correct
clarity: ok
significance: using the theory of linear operator to improve time and space complexity of DP-SGD is good novelty that could benefit the community
Weaknesses: - Typos: l108 “authors”
- Overall, the experiments are very insufficient. They are performed with batches of size 1, for which the vanilla per-sample gradient storage method has a better running time, but with which it is never compared. The authors should investigate in detail how their method would work with GPUs to see how it would be beneficial in practice: the complexity bounds given are linear in the batch size, but in practice we benefit greatly from parallel computation/optimized matrix multiplication.
- Experiments should be performed on whole (even small) networks used in practice, with batches of size greater than one. In Li et al (2022), they compare ghost norm to opacus when training a GPT2-like architecture and find similar runtimes; a similar comparison could be done here (even with much smaller versions of GPT2 when computational power is limited).
- The abstract on OpenReview differs from the one in the article. The former states that savings can be as high as 150x: this could be the case in the very specific situations chosen for the experiments, but the authors do not elaborate on why these choices are realistic in practice.
- The section on related work is sparse, with very few papers cited (no mention of why gradients need to be clipped, no mention of the whole deep learning literature where this is a real bottleneck (Opacus))
- No code for reproduction
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: - Typically, Opacus (storing per-sample gradients) or ghost norm will be 2x slower than non-DP training. How can the proposed method be 150x faster? That would mean that the proposed method is faster than non-DP training?
- Why does the green curve start decreasing at some point in the memory figure for the embedding layer?
- For the embedding layer, the authors state that q=1 in practice but show memory/runtime for q = {1000, 2000, …, 10000}. Can you please clarify on these choices for comparison?
- The paper states that it can be applied to arbitrary layers, but the examples are linear layers (which were already tackled in ghost norm previous work). What are the conditions for the existence of the decomposition in proposition 4.1? What do the authors have in mind by “efficient manner” (line 156)? When is that possible?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 3 good
Limitations: The paper does not include a limitation section.
Suggestions for improvement would be to perform a rigorous empirical study of the proposed method, with entire architectures and using micro batches, and to see if it can easily be used with an existing DL library (PyTorch, JAX, ...).
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your insightful comments towards improving the quality of our manuscript. We hope that the comments and considerations below will help improve your overall opinion of our work.
> Typos: l108 “authors”
This will be fixed in the revision.
> Overall, the experiments are very insufficient...
We will add additional experiments that vary the batch size as well as compare GPU costs against CPU costs. As a preview of the results, here are the fully-connected memory and runtime profiles of ghost clipping (GC) vs. our method (A) for varying bias dimensions $m$ and batch sizes $|B|$.
**Runtimes (in seconds)**
| m | GC (\|B\|=250) | A (\|B\|=250) | GC (\|B\|=500) | A (\|B\|=500) | GC (\|B\|=1000) | A (\|B\|=1000) |
|-|-|-|-|-|-|-|
| 16 | 0.224 | 0.061 | 0.369 | 0.094 | 0.865 | 0.154 |
| 32 | 0.230 | 0.064 | 0.370 | 0.089 | 0.907 | 0.156 |
| 64 | 0.239 | 0.063 | 0.430 | 0.092 | 0.850 | 0.154 |
| 128 | 0.285 | 0.061 | 0.562 | 0.090 | 1.354 | 0.156 |
**Peak Memory (in MB)**
| m | GC (\|B\|=250) | A (\|B\|=250) | GC (\|B\|=500) | A (\|B\|=500) | GC (\|B\|=1000) | A (\|B\|=1000) |
|-|-|-|-|-|-|-|
| 16 | 1705 | 190 | 3410 | 380 |7569 | 761 |
| 32 | 1717 | 190 | 3433 | 380 | 7616 | 761 |
| 64 | 1740 | 190 | 3480 | 380 | 7710 | 761 |
| 128 | 1787 | 190 | 3574 | 380 | 7898 | 761 |
> Experiments should be performed on whole (even small) networks ...
We will add additional experiments on a small BERT model to see how our framework performs in practice. We do want to point out that Li et al (2022) show that some models couldn’t be trained with Opacus simply because the per example clipping made GPUs run out of memory. We expect our results to be very similar in nature.
> ...the authors do not elaborate on why these choices are realistic in practice...
The numerical experiments in Section 6 were specifically chosen to demonstrate the better scalability of our implementation compared to ghost clipping and, hence, the parameter choices may not be typically observed in other literature.
We will add additional end-to-end experiments that utilize more practical parameter choices.
> The section on related work is sparse...
The importance of clipping gradients is discussed in the second paragraph of the “Introduction” section (Section 1). Moreover, the steps in Algorithm 1 clearly show that DP-SGD cannot be implemented without gradient clipping (hence, its importance).
We will add additional references to works where the clipping step is a serious bottleneck to training DP models. Also, we would be happy to consider specific works the referee has in mind.
> No code for reproduction
Code will be made publicly available if the paper is accepted.
> Typically Opacus (storing per sample gradients) ... will be 2x slower than non DP training...
As far as we are aware, there do not appear to be any studies that compare Opacus / ghost norm / non-DP gradient norm computations on individual layers. Hence, the “2x” slower performance is not informative of the layer-specific efficiency of ghost norm / Opacus vs non-DP for different choices of layer parameters.
Consequently, the 150x improvement of our framework is due to the large layer (or query) dimension sizes that are tested in our experiments. A simple analogy (with rough guesses) is that BubbleSort is only 2x slower than QuickSort when the number of elements $n$ is less than 20, but can be more than 1000x slower when $n$ is greater than 5000.
In view of the above arguments, we do not agree that our experiments imply our proposed framework is faster than non-DP training. In fact, our framework is strictly slower because it performs one more backward and forward pass of the model compared to non-DP training (see lines 117, 137, 140, 169, and 175 in the original submission for references of this).
> Why does the green curve start decreasing at some point in the memory figure for the embedding layer?
We believe it is due to the randomness of the generated embedding indices.
> For the embedding layer, the authors state that q=1 in practice...
For added clarification, it is stated (in Subsection 5.2) that a commonly used case is q=1. In practice, models may also use embedding layers with q >> 1. For instance, transformer models require q to be the size of the sentence and personalization models, such as those used in online advertising, may encode arbitrary information about a webpage’s context which can result in multiple queries to a model.
> The paper states that it can be applied to arbitrary layers, but the examples are linear layers...
We respectfully disagree about the first statement. Subsection 5.3 gives the analysis of a nonlinear layer, and the analyses on ghost clipping from prior works did not consider the sparsity of the related linear operators (this is why we are able to make a drastic improvement for embedding layers in particular).
In general, the decomposition in Proposition 4.1 is feasible when $\phi_x$ can be decomposed into the composition of at least two Fréchet differentiable functions.
Examples are given in Section 5 about what “efficient manner” means (on line 156). In short, this is the case when the operator $A^*$ in Proposition 4.1 admits a low-rank decomposition or is sparse. When exactly this is the case will depend on the characteristics of the layer function $\phi_x$ (in Proposition 4.1) and the chosen decomposition $\phi_x = \psi_x \circ Z_x$ (see the examples in Section 5 and Appendix E for some details).
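As a hedged illustration of what exploiting a sparse $A^*$ buys for the embedding layer (our own sketch, with illustrative names, not the paper's implementation): an example's gradient w.r.t. the embedding table scatters the output gradients into the queried rows (summing duplicates), so its norm only involves the distinct queried rows rather than the full table.

```python
import numpy as np

def embedding_grad_sq_norm(indices, output_grads):
    """Squared Frobenius norm of one example's embedding-table gradient,
    without materializing the (vocab_size x d) table gradient.

    indices:      (q,) rows of the table queried by this example
    output_grads: (q, d) gradients w.r.t. each looked-up embedding
    """
    # Sum the gradients of duplicate indices, then take the norm of the
    # distinct rows only; cost is O(q d) instead of O(vocab_size * d).
    uniq, inv = np.unique(indices, return_inverse=True)
    sums = np.zeros((len(uniq), output_grads.shape[1]))
    np.add.at(sums, inv, output_grads)
    return float((sums ** 2).sum())
```

In the commonly used q=1 case this reduces to the squared norm of the single output gradient.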
> The paper does not include a limitation section...
We will add a section discussing the limitations of our framework. Moreover, we will add experiments that compare our method on more complex model architectures.
---
Rebuttal Comment 1.1:
Title: Thanks
Comment: Thank you very much for your detailed response.
- It would indeed be very insightful to get (empirical) results of runtime/memory for a network + HP commonly used in practice.
- There is also the recent work of Bu et al (ICML 2023): Differentially Private Optimization on Large Model at Small Cost, which also improves on the original ghost norm. It could be interesting to compare to it too; what do you think?
- "We will add additional references to works where the clipping step is a serious bottleneck to training DP models. Also, we would be happy to consider specific works the referee has in mind"
Training with DP-SGD is more computationally intensive than non-private training for two main reasons: the first is gradient clipping, which the current paper tackles; the second is that, to attain a good privacy-utility tradeoff, it is better to use "mega batches": see De et al (2022): Unlocking High-Accuracy Differentially Private Image Classification through Scale, or Sander et al (2023): TAN Without a Burn: Scaling Laws of DP-SGD, or Yu et al (2023): ViP: A Differentially Private Foundation Model for Computer Vision, and the ghost norm paper that you are already citing, to cite only a few.
Overall, compute is one of the main bottlenecks to improving the performance of large architectures trained with DP-SGD, which makes your work potentially very impactful. I am raising my score to 5.
---
Reply to Comment 1.1.1:
Comment: Thank you for the feedback and additional references! We will definitely try to incorporate them in the revision.
> There is also the recent work of Bu et al (ICML 2023): Differentially Private Optimization on Large Model at Small cost, which is also be improving on the original Ghost norm. It could be interesting to compare to them too, what do you think?
If we understand the work of (Bu et al., 2023) correctly (specifically Appendix D.2), the improvement over ghost clipping seems to be from removing the need to store/compute certain extraneous parameter gradients during the backward pass of DP-SGD. While our implementation already does this removal in theory (see how the steps in Algorithm 2 on page 5 are described) and in our numerical experiments, we will (i) make this fact explicitly known in the revision and (ii) duly cite the work of (Bu et al., 2023). | null | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: This work unifies the analysis of the ghost clipping algorithm, which was previously analyzed for specific architectures only. The proposed framework extends previous results to a wide class of network architectures including new applications and provides asymptotically faster computation times than the previous case-by-case approaches. Some numerical simulations confirm the effectiveness of the proposed algorithm demonstrating faster computation time in practice.
Strengths: The paper is nicely structured and well motivated. The considered family of functions (NN architectures) is quite general, and the proposed approach introduced by Proposition 4.1 seems very principled. Despite the generality, the strategy proposed in Proposition 4.1 can be implemented for a number of important special classes of functions, such as fully-connected networks, embedding layers, and low-rank approximation layers. The authors demonstrate improved runtime and storage costs for this algorithm in these three special cases.
Weaknesses: -- The main motivation of the work is that when the batch size of DP-SGD is large, the runtime and storage can be large. Why can this problem not simply be solved by parallel computing (assigning different batches to different workers)? If this is one possible approach, I believe it should be mentioned and discussed in the related work.
-- It seems that the assumptions on the loss function and the layers $\phi_x$ are not explicitly stated. It is unclear to me whether it is sufficient for all $\phi$ and the loss function to be Fréchet differentiable, or whether additional technical assumptions are needed for Proposition 4.1. Why might the subgradient of $Z_x$ not be unique while $\nabla h$ and $g$ in this proposition are unique?
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: - Why do some entries in Tables 1 and 2 come with $\Theta(\cdot)$ and others with $O(\cdot)$? Are there lower bounds for storage and runtime costs for these settings?
- Does equation (3) hold for the last layer only?
- Typo in line 54.
---
Update: The authors addressed my concerns and I would like to increase my score from 5 to 6 to further support the acceptance.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: -
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes
---
Rebuttal 1:
Rebuttal: Thank you for taking the time to thoroughly review our manuscript. We hope that our responses below will help improve your impression of our work.
> The main motivation of the work is that when the batch size of DP-SGD is large, the runtime and storage can be large. Why can this problem not simply be solved by parallel computing (assigning different batches to different workers)? If this is one possible approach, I believe it should be mentioned and discussed in the related work.
In short, parallel computing does not solve the scalability issues of naive clipping compared to our approach, especially when certain dimensions of the problem (aside from batch size) are large (on the order of thousands or tens of thousands). This is made explicit in Tables 1 and 2, where our proposed framework scales much better with the other problem constants (e.g., $r$, $q$, $p$ and $d$) than the naive and ghost clipping approaches.
Also note that all three approaches considered in Tables 1 and 2 will (asymptotically) have the same benefits from switching to a parallel computing framework. Specifically, if batches were split among $s$ workers in parallel, the constant $|B|$ in Tables 1 and 2 would change to $|B|/s$ for all algorithms.
> It seems that the assumptions on the loss function and the layers $\phi_x$ are not explicitly stated. It is unclear to me whether it is sufficient for all $\phi$ and the loss function to be Fréchet differentiable, or whether additional technical assumptions are needed for Proposition 4.1. Why might the subgradient of $Z_x$ not be unique while $\nabla h$ and $g$ in this proposition are unique?
To be more precise, we will add some explicit assumptions just before Subsection 4.1 and make the differentiability of the key functions in Proposition 4.1 more clear. Specifically, we will make the explicit assumption that
* $\ell(x,\cdot), \phi_1(\cdot, x), \ldots, \phi_k(\cdot, x)$ are Fréchet differentiable;
* each function $\phi_i(\cdot, x)$ can be decomposed into the composition of at least two Fréchet differentiable functions;
* the functions $\ell_x$, $\phi_x$, $\psi_x$, and $Z_x$ are Fréchet differentiable.
In the revision, we will assume $Z_x$ is differentiable so that its (sub)gradient is unique.
As an aside, note that the choice of $Z_x$ itself is not unique. Indeed, if $\phi_x = \psi_x \circ Z_x$, then for any $\alpha > 0$ the functions $Z_x' = \alpha Z_x$ and $\psi_x'(\cdot) = \psi_x(\cdot / \alpha)$ also satisfy $\phi_x = \psi_x' \circ Z_x'$.
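A tiny numeric illustration of this non-uniqueness (all maps made up for the example): scaling the inner map by $\alpha$ while pre-scaling the input of the outer map accordingly leaves the composition unchanged.

```python
import numpy as np

# Toy decomposition phi = psi ∘ Z (illustrative maps, not the paper's):
def Z(x):                     # inner "feature" map
    return np.array([x, x ** 2])

def psi(v):                   # outer map, deliberately nonlinear
    return v[0] + v[1] ** 2

alpha = 3.0

def Z_scaled(x):              # Z' = alpha * Z
    return alpha * Z(x)

def psi_scaled(v):            # psi' pre-scales its input so psi' ∘ Z' = psi ∘ Z
    return psi(v / alpha)

x0 = 0.7
phi_val = psi(Z(x0))
phi_val_rescaled = psi_scaled(Z_scaled(x0))  # same value as phi_val
```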
> Why do some entries in Tables 1 and 2 come with $\Theta(\cdot)$ and others with $O(\cdot)$? Are there lower bounds for storage and runtime costs for these settings?
$\Theta(\cdot)$ is used for computations where we know the exact (up to universal constants) amount of storage/runtime needed to compute gradient norms with respect to certain dimensions. $O(\cdot)$ is used when the storage/runtime may depend on other features of the problem that are not dimensions (e.g., the sparsity of the queries for embedding layers). For these cases, an upper bound is more appropriate.
As far as we know, there are no informative lower bounds for computing gradient norms. Naively, though, there is the $\Omega(|B|)$ lower bound for storing/computing the scalars corresponding to each example's gradient norm in a batch.
> Does equation (3) hold for the last layer only?
Relation (3) may hold for any layer in a neural network. However, the functions $\phi_x$ between layers may differ. Note that this is why the corresponding $\phi_x$ functions in Algorithm 2, specifically $\{\psi_{x,i} \circ Z_{x,i}\}$, are indexed by the layer index $i$.
> Typo in line 54.
This will be fixed in the revision.
Title: GloptiNets: Scalable Non-Convex Optimization with Certificates
Decision: Accept (spotlight)
Summary: This paper focuses on non-convex global optimization on the hypercube.
The approach is built on top of the framework [7] and relies on non-negativity certificates that are not restricted to non-negative polynomials, since the approach is applicable to any function with computable Fourier coefficients.
The methodology is flexible in the sense that one can increase the accuracy of the obtained certificate by relying on the so-called k-SOS method. The approach is naturally compatible with GPU computation thanks to a modular Bessel kernel, which is stable by product. The resulting optimization method, called GloptiNets, can be handled with automatic differentiation. Afterwards, the algorithm is benchmarked against the TSSOS library, designed for sparse polynomial optimization, on six problems of dimension 3 or 4. The performance is similar or worse when the number of coefficients is small, and certificates are obtained with GloptiNets on examples for which the SDP solver used by TSSOS runs out of memory.
Strengths: The methodology allows a certain flexibility as it is not restricted to objective polynomial functions and does not require any assumptions on the structure (e.g., sparsity, symmetry) apart from the hypercube constraint.
One can also increase the accuracy of the certificate by considering bigger nets.
Weaknesses: Some technical explanations are not easy to follow (see the questions below).
The title of the paper is slightly misleading; I suggest emphasizing that the optimization is performed on the hypercube.
The paper only provides numerical comparisons for low dimensions (the number of variables is either 3 or 4) and the corresponding benchmarks are cooked by hand (called synthetic in the conclusion), thus do not correspond to any concrete optimization problem. Even if it is promising, I believe that this methodology in its current state is not yet convincing for optimizers interested in either academic or industrial applications.
In addition, the accuracy of the obtained certificate is low compared to interior-point methods. What would be the amount of computational time required to obtain a similar accuracy with GN? On Figure 2, the number of parameters required to go from 0.1 to 0.02 increases rather quickly. This can also be concluded from the numbers in Table 1. For problem 3, it is approximately 3 times slower to double the accuracy of the resulting certificate (going from 1.3e-2 to 6.9e-3). So one could believe that it would take a rather considerable amount of time to reach an accuracy of 1e-8.
I believe that the statement "TSSOS is not guaranteed to converge to f* but executes faster, and thus is on an equal footing with GloptiNets" is quite misleading for two reasons:
1) The efficiency of TSSOS increases when the optimization problem involves sparser polynomials (with low n), which is not the case here for problems 3 and 6.
2) The efficiency of TSSOS relies on that of the tool used to solve the SDP. An alternative SDP solver such as COSMO would certainly be much more efficient but would also yield certificates with lower accuracy.
Similarly, the sentence "poly-SoS methods (whose complexity scales dramatically and in a rigid way with dimension and degree of the polynomial)" is misleading. For a given certificate accuracy, there is no established estimate showing that the complexity of poly-SOS methods is higher.
I therefore strongly suggest that the authors either reformulate this statement or justify it with rigorously established statements.
The rigidity statement is also not really accurate, as any structure-exploiting variant of the Moment-SOS hierarchy comes with a specific computational cost and convergence rates.
There is a potentially high source of conservativeness for the resulting lower bound because of the difference between the L-infinity and Fourier norms.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: l123: each z_i is a vector of T^d so shouldn't we have Z \subset T^d?
l124: which norm is used?
After l154: the matrix R suddenly (re-)introduced in Equation (7) is rather confusing. Where does this matrix come from? I guess that it is related to R from definition (1).
Figure 1 is not mentioned in the text and I could not understand it. The F norm color is displayed in orange, not in red.
(8) The expectation notation differs from (7) as there is no sub-index anymore.
In Figure 2, I cannot see any red.
Could one obtain estimates of the difference between the L-infinity and Fourier norms? Is there a specific class of examples where this difference is significant?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: There is a clear section dedicated to limitations with an explicit list:
- an extensive hyper-parameter search is required
- the structure is not exploited
- the setting is restricted to the hypercube
- the benchmarks are somehow artificial
To be more convincing, the framework should be adapted to take into account the structure of real-world problems, e.g., the AC-OPF problem mentioned by the authors.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes
---
Rebuttal 1:
Rebuttal: ## About the quality of the certificate
We refer to the answer to all reviewers where this matter is discussed extensively with new experiments.
## Comparison with TSSOS
> The efficiency of TSSOS increases when the optimization problem involves sparser polynomials (with low n), which is not the case here for problems 3 and 6.
This is indeed the point we make. We show that TSSOS is very efficient when the number of coefficients in the polynomials is small, resembling a sparse polynomial (xps 1, 2, 4, 5). However, we show that the added value of our algorithm lies in certifying dense problems, of which xps 3 and 6 are instances.
We recall this limitation in the "Limitations" section, "our model is not competitive on problems which exhibit some algebraic structure, as for instance term sparsity" (l. 306).
> The efficiency of TSSOS relies on the one of the tool used to solve the SDP. An alternative SDP solver such as COSMO would be certainly much more efficient but would also yield certificates with lower accuracy.
We thank the reviewer for pointing this out. We will add it to the main text.
However, this does not change the conclusion of the experiments: polynomial hierarchies require forming the moment matrix, whose size scales exponentially in the number of coefficients or in the problem's dimension. Term and correlative sparsity leveraged by TSSOS mitigate this issue on sparse polynomials by turning a high-dimensional problem into multiple small-dimensional problems. However, this is not applicable in the "dense polynomials" settings we consider in xps 3 and 6. For these ones, allocating enough memory to assemble the moment matrix is not possible, so using COSMO instead of MOSEK will not change anything in those cases.
## Quality of the $F$-norm
This is indeed a source of conservatism which is hard to quantify. Note however that this is the smallest norm among the norms given by summable, positive operators (Lemma 3 in [1]).
[1] Blake Woodworth, Francis Bach, and Alessandro Rudi. Non-Convex Optimization with Certificates and Fast Rates Through Kernel Sums of Squares. In Proceedings of Thirty Fifth Conference on Learning Theory, pages 4620–4642. PMLR, June 2022.
## Complexity of TSSOS
We will reformulate this point:
- Given a polynomial $f$ and a tolerance $\epsilon$, neither poly-SoS nor GloptiNets provides a priori guarantees on the time complexity for certifying the positivity of $f$ up to an error $\epsilon$.
- However, GloptiNets provides a bound on the complexity of the algorithm (as choosing the model size and the training time is left to the user), whereas poly-SoS, in its standard variant, requires a relaxation order of at least $p_{\min} = ⌈deg(f)/2⌉$, which requires assembling a (potentially block-diagonal) moment matrix of size $O(\binom{d + p_{\min}}{d}) = O(p^d)$.
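To make the $O\big(\binom{d + p_{\min}}{d}\big)$ growth concrete, a quick computation (the dimension/order pairs are chosen arbitrarily for illustration):

```python
from math import comb

# Side length of the moment matrix, binom(d + p, d), for a few (d, p) pairs.
# Even at a fixed relaxation order p, the size explodes with the dimension d.
sizes = {(d, p): comb(d + p, d) for (d, p) in [(4, 5), (10, 5), (20, 5)]}
# e.g. sizes[(4, 5)] = 126, sizes[(10, 5)] = 3003, sizes[(20, 5)] = 53130
```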
## Answers to questions
We thank the reviewers for spotting the following typos we will fix in the main text.
l123: Indeed, this is $\mathbf{Z} \subset \mathbb{T}^d$ rather than $\mathbf{Z} \in \mathbb{T}^d$
l124: This is the Euclidean norm, $\lVert R K_\mathbf{Z}(\mathbf{x}) \rVert_2^2$
After l154: $g$ is a K-SoS model introduced in the Definition 1 above.
Fig. 1 shows on a random example the possible discrepancy between the $L_\infty$ norm (what we want to estimate in blue), the $F$-norm (what is tractable to estimate, in orange) and its tractable-to-evaluate probabilistic estimate, in green. This figure is updated in the document attached to the answer to all reviewers.
---
Rebuttal Comment 1.1:
Comment: Thank you for your responses, I will retain my score.
---
Summary: This paper presents GloptiNets, a method to bound the suboptimality of a given candidate solution to the optimization of a smooth (nonconvex) function on the hypercube.
Nonconvex optimization with optimality certificates is difficult. Oftentimes it is relatively easy (fast) to compute a candidate solution, but it is much more difficult (computationally expensive) to bound the suboptimality of the solution. A popular way to provide optimality certificates is via polynomial sums of squares (Lasserre's hierarchy), but it (a) can only handle functions that are polynomials and (b) scales poorly to the dimension of the problem due to computational bottlenecks in semidefinite programming.
GloptiNets tries to tackle the drawbacks of poly-SOS. The key observation is that, given a candidate solution, any positive function provides a lower bound, and hence a bound on the suboptimality. With this key observation, and considering the recent success of machine learning, the overall idea of GloptiNets is to seek a positive function parametrized by "neural networks" (more precisely the anchor points $z$ and the matrix $R$ in Definition 1.)
The rest of the paper mostly focuses on choosing the kernel function and leveraging sampling (and concentration inequalities) to evaluate the loss function more efficiently (which the reviewer did not check carefully).
The experiments of this paper compare GloptiNets with TSSOS, a state-of-the-art polynomial SOS toolbox, on computing certificates for random positive trigonometric polynomials. The experiments show that although TSSOS can obtain high-accuracy certificates, GloptiNets is much more scalable.
Strengths: - The paper tackles the important problem of nonconvex optimization with certificates.
- The proposed GloptiNets mitigates the drawbacks of existing poly-SOS frameworks, i.e., it can handle non-polynomial problems and it does not rely on semidefinite programming. As a result, it is more scalable.
Weaknesses: The biggest issue with this paper is that (i) its experiments are rather limited (I believe there are only 6 problems tested), and (ii) the limited experiments are not even related to applications where optimality certificates are desired. Therefore, it is questionable how useful this technique will be in practice (and it kind of defeats the purpose stated in the introduction).
I would encourage more experiments where the optimality certificates are indeed valuable (for example recently in [robotics](https://arxiv.org/abs/2109.03349)).
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: I notice that in the results table, the best suboptimality GloptiNets can produce is in the order of $10^{-3}$. I wonder if this can be improved by, for example
- increase the number of samples $x_1,\dots, x_N$,
- increase the size of the neural network (Fig. 2 partially answers this, but the accuracy there is still far from, say $10^{-9}$ as can be achieved by TSSOS)
- let the neural networks train for longer time
- increase the number of samples $\omega_1,\dots,\omega_n$
If it is possible to achieve the same certificate accuracy as TSSOS, then it is useful to report the number of sample points and training time required to attain the accuracy; if it is not possible, then an explanation of this should be provided (because the positive function class of GloptiNets should subsume the positive function class of TSSOS, and therefore the NN should be able to achieve the same accuracy as TSSOS if enough parameters are allowed?).
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Limitations are clearly stated.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes
---
Rebuttal 1:
Rebuttal:
## Use of GloptiNets in practice
Synthetic datasets offer the advantage of precise parameter control, including the function's norm and the number of coefficients for polynomials. This allows us to determine that the quality of GloptiNets certificates depends on the former and not the latter.
It would be very interesting to apply GloptiNets to the example the reviewer gives in robotics; however, their model uses specific functions which exhibit discontinuities: this may require a separate focused effort for efficient certification.
Moreover, while certifying polynomials is a natural application of GloptiNets, its capabilities extend beyond this specific function class. In our response to reviewers, we present new examples of successfully certifying kernel mixtures, for which GloptiNets is the only alternative we are aware of. Kernel mixtures, which represent functions learned by various kernel algorithms, are widely used in the machine learning domain, suggesting that GloptiNets could find substantial applicability there.
To be more specific on this last point, given some samples $(x_i, y_i)_{1 \leq i \leq n}$ from a black-box process, one can fit a model $h(x) = \sum\_{i=1}^n \alpha_i K(x_i, x)$, where $\alpha$ is given by *e.g.* $\alpha = (K + \lambda I)^{-1} y$ for Ridge regression. Then, GloptiNets can certify the minimum of this model, which acts as a certificate on the black-box process under some additional statistical assumptions.
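The fitting step of this pipeline can be sketched as follows (the Gaussian kernel, data, and all names are made up for illustration; the certification of $\min h$ itself would then be done by GloptiNets):

```python
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    # Gram matrix K[i, j] = exp(-gamma * |A_i - B_j|^2)
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(50, 2))              # samples from a "black box"
y = np.sin(3 * X[:, 0]) * np.cos(2 * X[:, 1])     # stand-in black-box values

lam = 1e-3
K = rbf_kernel(X, X)
alpha = np.linalg.solve(K + lam * np.eye(len(X)), y)   # ridge coefficients

def h(x):
    """Kernel mixture h(x) = sum_i alpha_i K(x_i, x): the object whose
    minimum GloptiNets would certify."""
    return rbf_kernel(np.atleast_2d(x), X) @ alpha
```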
Overall, our work serves as a proof of concept, laying the groundwork for future research to build upon and apply this framework to certify real-world applications.
## Tightness of the certificates
We give a detailed response in the general answer to all reviewers. There, we detail the influence of all the parameters you mentioned. In a nutshell, increasing the number of samples $x_1, \dots, x_n$ and the training time will result in better optimization, hence lower estimation error; increasing the size of the network will lower the approximation error; finally, increasing the number of frequencies $\omega_1, \dots, \omega_n$ will result in lower variance.
---
Rebuttal Comment 1.1:
Comment: Thanks for the authors' response.
The new experimental results are very interesting, and the new certificates obtained by L-BFGS are pretty good.
I raised my score to Accept.
---
Summary: This paper introduces a method for certain non-convex optimization problems on a hypercube or torus (i.e. with periodic boundary conditions), which also provides a certificate (a measure of how close the fitted function is to the target).
As I understand it, the certificate computation utilizes the Fourier basis.
They do two sets of experiments using their model GloptiNets: one on random trigonometric and one on Chebyshev polynomials, with an increasing number of coefficients. They compare their method's performance against an existing polynomial solver, TSSOS. Their model's certificates are orders of magnitude less tight (~$10^{-2}$) than TSSOS ($10^{-12}$). But their model has the benefit that it takes almost constant time, independent of the complexity of the polynomial, to get within the certificate's accuracy. They also show that the certificate gets tighter as they increase the number of model parameters, though the drop seems slower than linear.
Strengths: The paper gives a detailed exposition of the derivation of the certificate and discusses existing work extensively. I am no expert in this field, but I could more or less follow the process.
The contribution of the work, in addition to providing a certificate, seems to lie in its efficiency. They claim it should work well in higher dimensions, too, and the run time seems to be not affected much by the complexity of the target function.
Also, being able to use fast stochastic optimizations of DNN while still providing a certificate for the fit seems to be another benefit of the method.
Since I am not familiar with this branch of optimization, I cannot assess the novelty that much.
Weaknesses: A big emphasis is on the derivation of the certificate. While I appreciate the details, it makes it difficult to follow the flow of the paper. Instead, a few more details on how the extended k-SoS model is implemented and how the optimization is done would be useful. I see discussions of the details in the appendix, but they are brushed over in the main text. For example, is the R matrix optimized, or fixed?
Also, the high value of the certificate and lack of hyperparameter tuning, except model size, weakens the paper (see questions).
Additionally, the target function in the two experiment, trigonometric and Chebyshev, are both related to the Fourier series, which is used in computing the certificate. I wonder how well the model performs on other polynomials with other bases (see questions).
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: 1. currently the certificate seems very loose at $O(10^{-2})$. Is this a good number?
2. How much does hyperparameter tuning (other than model size) affect the certificate? Is there hope for much better values, or does it need dramatic changes?
3. Experiments: How would the model perform on polynomials from a dramatically different basis from the Fourier basis (e.g. $x^p$)? I understand this doesn't work on the Torus (no periodicity), but does the hypercube have a similar limitation (meaning you *have to* use the Chebyshev basis due to boundary conditions)?
4. Baseline methods seem very limited. Is TSSOS the only viable alternative?
5. In Fig 2, no red curve is presented, but mentioned in the caption.
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 3 good
Limitations: They discuss the limitations. I would also add discussions about the class of polynomials, if it is currently limited to Chebyshev and trigonometric polynomials. As for societal impact, I don't see any issues, as this is a purely mathematical optimization paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes
---
Rebuttal 1:
Rebuttal: ## Quality of the certificate
We refer the reviewer to the common response on that aspect.
## Implementation details
We would be happy to include implementation details in the main text, as a lot of attention was dedicated to having a structure which can scale to thousands of parameters.
For the optimization, we used gradient descent with momentum and a cosine scheduler. This was the most classical configuration we found in the DL literature. We performed the hyperparameter search on the type of regularization (proxy for the variance or not), the value of the regularization and the learning rate.
Finally, to answer your question precisely: both the coefficients $R$ and the anchor points $\mathbf{z}$ are optimized during training (the `for` loop in Algorithm 1).
## Other polynomial bases
Simply put, for polynomial optimization there are no limitations on the basis. Indeed, if the polynomial $h$ is given in another real (resp. complex) basis, we can first perform a change of basis to the Chebychev (resp. Fourier) basis, which is a linear operation. The resulting polynomial would be handled natively by GloptiNets. More than that, the algorithm applies to the minimization of any function defined on a hypercube $C = [a_i, b_i]_{1 \leq i \leq n}$, for which we have access to the Fourier/Chebychev coefficients.
Here is a more detailed response:
**Hypercube constraints.**
Given a function $\tilde{h}$, what the algorithm requires is a set of constraints $C = [a_i, b_i]_{1 \leq i \leq n}$ to localize the minimum. Lasserre's polynomial hierarchies do not necessarily need such a constraint, but have much better convergence guarantees under similar constraints. For instance, this happens if $\lVert x \rVert^2 \leq R$ is in the constraint set, which is called the "Archimedean property"; this is encompassed in the hypercube constraint $C$ we consider. Thus, let us assume we minimize $h(x) = \tilde{h}\big((a_i + x_i (b_i - a_i))_{1 \leq i \leq n}\big)$ for $x \in [0, 1]^d$.
**No loss of generality with periodic functions**.
Now, define $f(u) = h(\cos(2\pi u))$, with the cosine applied componentwise. This is a $1$-periodic function **whose minimum is the same as that of $h$**. This is detailed in Remark 1, l. 109. This shows that working with trigonometric polynomials entails no loss of generality.
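A quick numeric sanity check of this substitution in one dimension (the polynomial below is arbitrary, chosen only for illustration): the periodized function attains exactly the same minimum value as the original.

```python
import numpy as np

# Toy 1-D polynomial h on [-1, 1]; f(u) = h(cos(2*pi*u)) is 1-periodic
# and has the same minimum value as h, since cos(2*pi*u) covers [-1, 1].
h = lambda t: t ** 3 - 0.5 * t
f = lambda u: h(np.cos(2 * np.pi * u))

t = np.linspace(-1, 1, 100001)   # dense grid on the original domain
u = np.linspace(0, 1, 100001)    # dense grid on one period
min_h = h(t).min()               # attained at t = -1, value -0.5
min_f = f(u).min()               # same value, attained at u = 1/2
```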
**Better implementation with the Chebychev basis.**
One caveat of the transformation $f(u) = h(\cos(2\pi u))$ is that $f$ is an even function in all dimensions: GloptiNets with Fourier series would minimize $f$ on $(0, 1)^d$, whereas looking at $(0, 1/2)^d$ would be sufficient! Fortunately, we can overcome this bad dependency on the dimension by working directly with Chebychev series. They form an orthonormal basis of (non-periodic) functions defined on $(-1, 1)$. Assume we have access to the coefficients of $h$ in this basis, i.e. $h(x) = \sum_\omega h_\omega T_\omega(x)$ where $T_\omega$ is the Chebychev polynomial of degree $\omega$. Then, the key property we leverage is that $|T_\omega(x)| \leq 1$ on $(-1, 1)$. This implies that
$$
\lVert h \rVert_\infty \leq \sum_{\omega} |h_\omega| := \lVert h \rVert_F, ~ \text{which allows} ~ h_\star \geq c - \lVert h - c - g \rVert_F, ~~ \forall c \in \mathbb{R}, g \geq 0.
$$
This is exactly Eq. $(2)$ l.111, which allows for the same analysis as the one with the Fourier series. Note that we could have the same relation with the canonical basis you mentioned (i.e. $|x^p| \leq 1$ on $(-1, 1)$) but this would likely result in a big gap between the resulting $F$-norm and the $L_\infty$ norm.
To conclude, an algorithm which handles Fourier series would be enough. However, we obtain a more efficient implementation with the Chebychev basis, to which any polynomial can be converted.
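As a sanity check of this change-of-basis argument (not the paper's implementation; this uses NumPy's Chebyshev utilities on an arbitrary cubic), the $\ell_1$ norm of the Chebyshev coefficients indeed upper-bounds the sup norm on $(-1, 1)$:

```python
import numpy as np
from numpy.polynomial import chebyshev as C

# Monomial-basis polynomial h(x) = 1 - 2x + 0.5x^3 (arbitrary example)
power_coeffs = np.array([1.0, -2.0, 0.0, 0.5])
cheb_coeffs = C.poly2cheb(power_coeffs)       # linear change of basis

# Since |T_w(x)| <= 1 on [-1, 1], the l1 norm of the Chebyshev
# coefficients upper-bounds the sup norm of h (the "F-norm" bound):
F_norm = np.abs(cheb_coeffs).sum()

x = np.linspace(-1, 1, 10001)
sup_norm = np.abs(C.chebval(x, cheb_coeffs)).max()
# Here sup_norm = 2.5 (attained at x = -1) while F_norm = 2.75.
```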
## Other solvers
We compared to TSSOS as it is the state of the art for fast (approximate) certification of polynomials. On top of that, it handles both complex and real polynomial hierarchies, with a well-functioning implementation available.
---
Rebuttal Comment 1.1:
Title: Thanks for the reply
Comment: Thank you for clarifying the points. I guess for most problems of interest the bases you used are general enough. And thank you for the new experiments. The choice of optimizer seems to impact the certificate value significantly.
---
Summary: This paper develops an approach to compute certificates for non-convex optimization of functions over the hypercube or torus. It does so by considering the generic certificate recipe from [7] and then relaxing the positive semi-definite constraint to a class of functions that is easier to optimize, which they denote K-SoS. This class of functions depends on the choice of a kernel K, of which they explore different alternatives. The authors consider optimization aspects of this general framework and provide a concrete algorithm that they name GloptiNets. They also provide experiments comparing the tightness of the certificate across the number of parameters and related approaches.
Strengths: 1. The paper is clear and beautifully written. It is a pleasure to read.
2. Results are sound to the best of my knowledge. The theoretical derivations are clear and give the proper references to understand which steps have been taken.
3. The authors consider all relevant aspects of the implement of this framework: from expressiveness properties of the function approximation, to probabilistic estimates of the certificate, to practical implementation details.
Weaknesses:
No major weaknesses identified. Below are some minor suggestions that I hope the authors consider, but should not be taken as criticism towards the work.
## Minor suggestions
1. Add x and y labels to Figure 1.
2. Some aspects of this paper, such as the lower bound formula (2) or the anchor points Z, are introduced assuming the reader knows about the related works. I would encourage the authors to aim for a larger audience. In particular, a few words about the intuition can go a long way towards making the work more accessible.
## Typos
L244: eventhough -> even though
L245 optimisation -> optimization (since American spelling is used elsewhere)
L93: "we focus on the computational performances of our model" it's unclear what the authors mean here by computational performances, is it run-time? tightness of the certificate? the trade-off between these two? Please make precise
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: 1. What is Z_d in eq. (1)? I don't think it has been defined before.
2. It was not clear to me whether the anchor points Z are given by the problem or are part of the choice of the kernel family. Clarification would be appreciated.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 3 good
Limitations: I believe the limitations have been correctly addressed by the author in the Limitations section.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank this reviewer for the kind and encouraging feedback!
As you suggest, we will use additional space to provide more detail on the derivation of the certificate in Eq. (2). Typically, it relies on the relation
$$
f_\star = \sup_c c ~~ \text{ s.t. } ~~ f - c \geq 0 ~~~~~~~~ \text{(A)}
$$
which is a convex problem, albeit with a dense set of constraints. From there, a penalized version of $(A)$ is given by
$$
f_\star = \sup_{c, g \geq 0} c - \lVert f - c - g \rVert_\infty ~~~~~ (B)
$$
introduced in [1], which is still intractable (this actually highlights the fact that approximating $f$ is as hard as finding $f_\star$) but provides a general recipe for computing certificates, by taking any family of positive functions $g$ (here, kernel-SoS models) and any surrogate for the $L_\infty$ norm (here, the F-norm). This yields Eq. $(2)$ in the paper. Adding an efficient way to evaluate it (as the $F$-norm is an infinite sum) and optimize it is the gist of our contribution.
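For intuition, relation $(B)$ can be checked numerically on a toy 1-D example. The following sketch is illustrative only (a hand-built nonnegative $g$ on a grid, rather than a k-SoS model with an F-norm surrogate); it confirms that the penalized value always lower-bounds $f_\star$:

```python
import numpy as np

# Toy check of relation (B): for ANY scalar c and ANY nonnegative g,
#   c - ||f - c - g||_inf  <=  f_*   (the global minimum of f).
# The paper's contribution is taking g in a k-SoS class and replacing the
# sup-norm with a tractable surrogate; here everything is done on a grid.
xs = np.linspace(0.0, 1.0, 2001)
fx = np.sin(6 * xs) + 0.5 * xs**2          # an arbitrary smooth 1-D function
f_star = fx.min()                          # grid proxy for the true minimum

def certificate(c, gx):
    assert (gx >= 0).all()                 # g must be a nonnegative function
    return c - np.abs(fx - c - gx).max()

cert_trivial = certificate(-0.5, np.zeros_like(xs))        # g = 0: loose bound
cert_better = certificate(-0.5, (fx + 0.5).clip(min=0.0))  # crude g >= 0 fit
assert cert_trivial < cert_better <= f_star + 1e-9
```

A better approximation $g \approx f - c$ tightens the certificate, which is exactly why optimizing the model $g_\theta$ matters.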
## Questions
$\mathbb{Z}$ is the set of integers, which was indeed not introduced before.
Our positive function class consists of functions of the form $x \mapsto {\bf v}(x)^\top G {\bf v}(x)$, with $G \succeq 0$ and ${\bf v}(x) = (v_i(x))_{1 \leq i \leq n}$. We realized that optimizing the embedding ${\bf v}$ is beneficial for the certificate; that is why we take ${\bf v}$ of the form $v_i(x) = K(z_i, x)$. The setup we tried is $K$ set to the Bessel kernel (l. 217 or Eq. $(14)$) and $z_i \in \mathbb{T}^d$ the anchor points tuned during the optimization. In kernel-learning vocabulary, this is learning the anchor points of a Nyström projection.
[1] Blake Woodworth, Francis Bach, and Alessandro Rudi. Non-Convex Optimization with Certificates and Fast Rates Through Kernel Sums of Squares. In Proceedings of Thirty Fifth Conference on Learning Theory, pages 4620–4642. PMLR, June 2022.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for their clarification answers. I keep my score unchanged. | Rebuttal 1:
Rebuttal: We thank the reviewers for their time and helpful feedback. We address each of their questions individually. Furthermore, we showcase new experiments demonstrating two key points: (1) the ability to obtain tighter certificates than the ones reported in the paper and (2) the versatility of our framework beyond polynomial optimization.
## Second-order optimization for tighter certificates
### Motivation
As pointed out by reviewers `RR26`, `rJQd` and `qa97` and detailed in Sec. 5 "Limitations", the certificates obtained by GloptiNets on **small polynomial** optimization tasks are not as tight as the ones obtained by polynomial solvers (Table 1, lines 1-2,4-5).
The certificates we compute are of the form `Fnorm_approx + var/√N`, where `N` is the number of frequencies sampled. The second term can be made negligible by choosing $N \gg 1$. The first term measures the quality of the approximation of $f - f_\star$ by our extended k-SoS model $g_\theta$, where $\theta = (R, \mathbf{z})$ as in Definition 1. More specifically, it is the sum of the approximation error and the estimation error. Given the fact that k-SoS models are universal approximators, the former will go to $0$ if we add more parameters to $g_\theta$. To make the latter smaller, we perform **approximate second-order optimization** in Algorithm 1 with L-BFGS.
### Results
The results for these experiments are available in **Table A** in the new material attached.
Using L-BFGS enables the bigger model (GN-big) to obtain a certificate about **one order of magnitude** tighter than the ones presented in the paper. The smaller model also benefits from this procedure, although marginally. This shows that in the experiments reported in Table 1 in the original submission, GN-big suffered from a high estimation error.
To conclude, tighter certificates than the ones presented in Table 1 are achievable by GloptiNets. This requires (1) sampling enough frequencies $N \gg 1$ *(to lower the variance term)*, (2) large enough models *(for the approximation error)*, and (3) good enough optimization *(for the estimation error)*. We detailed the first two points in our paper, but will definitely add the third to the discussion in the experiments section.
We thank the reviewers who motivated these new results.
## Optimizing Kernel mixture
### Motivation
Although we compared our approach to TSSOS for certifying polynomials, our model is not confined to this function class. This is evidenced by successful experiments on kernel mixtures, where our approach stands as the only viable alternative we are aware of.
In our submission, we carried out experiments with polynomials so that we could compare with existing solvers. The conclusion was that GloptiNets was no match for TSSOS when certifying small polynomials $f$ which exhibit some algebraic structure, but had a complexity independent of the number of coefficients of $f$. This enables GloptiNets to certify polynomials which are out of reach of TSSOS (Table 1, lines 3,5). Here, we go one step further and optimize a kernel mixture, which **has no polynomial structure**.
### Results
The results are showcased in **Figure A** in the document attached.
The functions we certify are of the form $f(x) = \sum_{i=1}^n \alpha_i K(x_i, x)$, where $K$ is the Bessel kernel. Such functions are ubiquitous in machine learning and arise *e.g.* when performing kernel ridge regression. They are characterized by their number of coefficients $n$ and their RKHS norm $\lVert f \rVert_{\mathcal{H}_s}^2$. Following Algorithm 1 as outlined in the paper, we run GloptiNets and plot the certificate as a function of the network size.
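For illustration, the shape of this function class and its RKHS norm can be sketched as follows (a Gaussian kernel stands in for the Bessel kernel here, purely to show the class; the norm formula $\lVert f \rVert_{\mathcal{H}}^2 = \alpha^\top K \alpha$ is the standard one for kernel expansions):

```python
import numpy as np

# A kernel mixture f(x) = sum_i alpha_i K(x_i, x), the function class
# certified in the new experiments. Gaussian kernel used as a stand-in.
rng = np.random.default_rng(0)
n, d = 50, 3
centers = rng.uniform(size=(n, d))           # the points x_i
alpha = rng.normal(size=n)                   # the coefficients alpha_i

def K(a, b, bw=0.5):
    return np.exp(-np.sum((a - b) ** 2, axis=-1) / (2 * bw**2))

def f(x):
    return alpha @ K(centers, x)             # evaluate the mixture at x

gram = K(centers[:, None, :], centers[None, :, :])
rkhs_norm_sq = alpha @ gram @ alpha          # ||f||_H^2 governs tightness
assert rkhs_norm_sq >= 0                     # Gram matrix is PSD
```

The point of the experiment is that the certificate depends on `rkhs_norm_sq`, not on the number of terms `n` in the representation.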
We can draw the same conclusion as seen in Figure 2 in the paper.
Firstly, GloptiNets' certificates depend solely on the function's norm, independent of its representation with the number of basis functions $K(x_i, x)$. Secondly, bigger networks provide tighter certificates.
Thus, even though the criticism that GloptiNets is not as tight as TSSOS is fair, it is only valid when certifying **small polynomials**. For the newly showcased experiments, GloptiNets is the only alternative we are aware of.
## Update Fig. 1
We update Fig. 1 from the paper to add error bars on the random realizations and improve the rendering.
Pdf: /pdf/bb37064fdcf794eb808986f857d3db4600c944f8.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: They present a novel approach to non-convex optimization with certificates, which handles smooth functions on the hypercube or on the torus. Unlike traditional methods that rely on algebraic properties, their algorithm exploits the regularity of the target function intrinsic in the decay of its Fourier spectrum.
Strengths: The paper is clearly written and the theory is interesting.
Weaknesses: The experimental results are not abundant.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: No.
Confidence: 1: Your assessment is an educated guess. The submission is not in your area or the submission was difficult to understand. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: No.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We direct the reviewer's attention to the new experiments presented in the collective response to all reviewers. Should they have any additional concerns or questions, we would be happy to discuss them further.
---
Rebuttal Comment 1.1:
Title: Thanks for the detailed rebuttal.
Comment: Thanks for the detailed rebuttal. It solved my concerns. I will keep my score. | null | null | null | null | null | null |
State-space models with layer-wise nonlinearity are universal approximators with exponential decaying memory | Accept (poster) | Summary: The paper provides a constructive proof that state-space models (SSMs) are universal approximators of sequence-to-sequence mappings.
Moreover, it shows that SSMs (and even empirically S4) suffer from exponentially decaying memory just like standard RNNs.
Strengths: Up to my knowledge this is the first (constructive) proof that SSMs are universal. I find it in particular interesting that SSMs also suffer from exponentially decaying memory (given the provided definition of such).
Weaknesses: Regrettably, the paper exhibits a significant lack of quality in its writing, containing numerous grammatical errors and typos. It is imperative that it undergoes a comprehensive proofreading process to address these issues.
From my understanding, the paper proves universality for SSMs, which does not include universality for S4 (i.e., SSMs with specific structure of the matrices). This is a major limitation, as in practice no one uses simple SSMs, but only S4. Can S4 be included in the proof? I presume one has to show that also S4 can approximate element-wise functions up to any precision.
The section about the curse of memory is very hard to follow. Can you please rewrite it and properly introduce the concept of memory functions?
The paper is not rigorously written. For instance, it would be much more readable, if the authors would define every variable, e.g., $W \in \mathbb{R}^{m\times m}$ and so on. This would also help the authors avoid using variables that haven't been introduced before.
The provided full paper in the appendix and the main paper differ: for instance in equation 9: $\mathbf{H}$ depends on $t$ in the full paper, but not in the main paper.
From what I understood, the universality proof of proposition 3.6 approximates functions where the target value at every index depends on all elements of the full input sequence. However, this is not realistic, as mainly causal operators between sequences have to be learned, i.e., operators that are independent of the future values of the input sequence at every index. Can you please change the proof accordingly?
Technical Quality: 3 good
Clarity: 1 poor
Questions for Authors: See weaknesses
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 1 poor
Contribution: 3 good
Limitations: The paper provides an interesting and important proof that SSMs are universal, as well as that the memory functions decay exponentially. However, the proof does not include S4 models. Moreover, the quality of writing is very poor, which makes it hard to follow. Overall, the provided proof is very simple, which makes it appealing.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely appreciate your important questions regarding our proof, as well as your constructive suggestions for enhancing the writing quality. Below is our response to the points raised:
1. "Regrettably, the paper exhibits a significant lack of quality in its writing, containing numerous grammatical errors and typos. It is imperative that it undergoes a comprehensive proofreading process to address these issues."
- Thank you for bringing the writing concern to our attention. We have undertaken thorough proofreading and incorporated the necessary updates in the final revision.
2. "From my understanding, the paper proves universality for SSMs, which does not include universality for S4 (i.e., SSMs with specific structure of the matrices). This is a major limitation, as in practice no one uses simple SSMs, but only S4. Can S4 be included in the proof? I presume one has to show that also S4 can approximate element-wise functions up to any precision."
- We have prepared a table in the 'Response to all Reviewer' section to compare SSMs with linear RNNs, nonlinear RNNs, and S4. The primary differences between the vanilla state-space models (SSMs) and S4 involve model parameterisation, weight initialisation, discretisation, normalisation, dropout, and residual connections. However, with the exception of the residual connection, the model architectures of SSMs and S4 are almost identical, as both alternately stack linear RNNs and nonlinear activation layers. Therefore, in terms of universal approximation, the approximation capacities of both SSMs and S4 are equivalent.
3. "The section about the curse of memory is very hard to follow. Can you please rewrite it and properly introduce the concept of memory functions?"
- We have rewritten the section on the curse of memory to improve the introduction of the memory function, as well as the phenomenon of the curse of memory. Due to word constraints in the reply section, the revised fragments are presented in the 'Response to all Reviewer' under the section 'Definition of Memory Function and How to Evaluate It'.
4. "The paper is not rigorously written. For instance, it would be much more readable, if the authors would define every variable, e.g., $W \in \mathbb{R}^{m\times m}$ and so on. This would also help the authors avoid using variables that haven't been introduced before."
- Thank you for pointing this out. We have updated variable dimension in SSM single layer general form as follows: $x \in \mathbb{R}^{d_{in}}, y \in \mathbb{R}^{d_{out}}, h \in \mathbb{R}^m, W \in \mathbb{R}^{m \times m}, U \in \mathbb{R}^{m \times d_{in}}, b \in \mathbb{R}^m, C \in \mathbb{R}^{d_{out} \times m}, D \in \mathbb{R}^{d_{out} \times d_{in}}$.
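With these dimensions, the single-layer general form can be sketched as follows (an illustrative minimal implementation of the recurrence, not our experimental code; the nonlinearity is applied layer-wise between layers, never on the hidden state):

```python
import numpy as np

# Single SSM layer: linear recurrence h_t = W h_{t-1} + U x_t + b,
# readout y_t = C h_t + D x_t, with the stated dimensions.
rng = np.random.default_rng(0)
d_in, d_out, m, T = 4, 2, 8, 16
W = rng.normal(size=(m, m)) * 0.1      # kept small so the recurrence is stable
U = rng.normal(size=(m, d_in))
b = rng.normal(size=m)
C = rng.normal(size=(d_out, m))
D = rng.normal(size=(d_out, d_in))

def ssm_layer(xs):
    h = np.zeros(m)
    ys = []
    for x in xs:                       # strictly causal recurrence over time
        h = W @ h + U @ x + b
        ys.append(C @ h + D @ x)
    return np.stack(ys)

ys = ssm_layer(rng.normal(size=(T, d_in)))
assert ys.shape == (T, d_out)
```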
5. "The provided full paper in the appendix and the main paper differ: for instance in equation 9: $\mathbf{H}$ depends on $t$ in the full paper, but not in the main paper."
- Thank you for the careful reading and pointing out the difference. We discovered the missing subscript $t$ and added it in the full paper.
6. "Form what I understood, the universality proof of proposition 3.6 approximates functions, where the target value at every index depends on all elements in the full input sequence. However, this is not realistic, as mainly causal operators between sequences have to be learned, i.e., operators that are independent of the future values of the input sequence at every index. Can you please change the proof accordingly?"
- Thank you so much for raising this important question. In proposition 3.6, we did not explicitly specify assumptions for the targets other than **continuity** w.r.t. the inputs. Here we discuss several other important properties expected from the targets. In the context of proposition 3.9, we explore these properties as defined in Definition 3.1 of [1].
1. **Causality**. Autoregressive problems such as language modelling focus on the case where the output $y_t$ depends only on the inputs $x_{0, \dots, t}$. The causality of the targets guarantees that the temporal convolution kernels $\rho_k$ in Figure 4 satisfy $\rho_k (t) = 0, t \leq 0.$ This causal temporal convolution, together with proposition 3.2, ensures that the state-space model can be implemented in a recurrent form.
2. **Time-homogeneous**. In sequence modelling, the concept of being time-homogeneous is fundamental. This is because time-inhomogeneous targets may not be approximated to an arbitrary level of accuracy, regardless of how much data or computational resources are available.
3. **Bounded & Regular**. These two properties are usually assumed from the theoretical consideration. As shown in [1], continuous causal regular and time-homogeneous linear functionals have a convolution form based on the Riesz representation.
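The causal-convolution-to-recurrence correspondence in point 1 can be sanity-checked numerically. A minimal sketch with a generic exponential kernel (illustrative only, not the construction used in the proof):

```python
import numpy as np

# A causal exponential kernel rho(t) = lambda^t (t >= 0) gives
#   y_t = sum_{s <= t} lambda^(t-s) x_s,
# which is exactly the linear recurrence h_t = lambda * h_{t-1} + x_t;
# future inputs never contribute, so a recurrent implementation exists.
rng = np.random.default_rng(0)
lam, T = 0.9, 50
x = rng.normal(size=T)

# Direct causal convolution.
y_conv = np.array([sum(lam ** (t - s) * x[s] for s in range(t + 1))
                   for t in range(T)])

# Equivalent recurrent form.
h, y_rec = 0.0, []
for xt in x:
    h = lam * h + xt
    y_rec.append(h)

assert np.allclose(y_conv, np.array(y_rec))
```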
### **Limitations:**
7. "The paper provides an interesting and important proof that SSMs are universal, as well as that the memory functions decay exponentially. However, the proof does not include S4 models. Moreover, the quality of writing is very poor, which makes it hard to follow. Overall, the provided proof is very simple, which makes it appealing."
- As mentioned in response to the second comment, the primary differences between SSMs and S4 models lie in aspects other than the model architecture. Therefore, the universality proof for SSMs should also naturally extend to S4 models.
- We apologise for the current state of our manuscript's writing. We are dedicated to improving this aspect significantly in the final revision.
- We are glad to know that you found our proof's simplicity appealing. Our aim is to maintain rigour while ensuring comprehensibility.
---
[1] Zhong Li, Jiequn Han, E. Weinan, and Qianxiao Li. "On the Curse of Memory in Recurrent Neural Networks: Approximation and Optimization Analysis." In International Conference on Learning Representations. 2020.
---
Rebuttal Comment 1.1:
Title: Thanks for the detailed response
Comment: Thanks for the detailed response. The promised changes by the authors will increase the quality of the paper. I thus increase my rating. | Summary: This paper attempts to show several properties of SSMs: a) SSMs can approximate element-wise functions and temporal convolutions b) Universality of SSMs c) Exponential memory decay. The authors also show experiments of SSM's exponential memory decay and compare them against variants of RNN architectures.
Strengths: * Thorough background section and clear introduction of the RNN and SMM architecture.
* Clear writing.
* The usage of the Kolmogorov-Arnold thoerem and Volterra series is interesting.
Weaknesses: * My main concern is that most of the main results seem trivial: when taking away the tricks that make SSMs work, such as smarter initialization and diagonal parameterisation, architecture-wise, SSMs are simply an RNN without non-linearity between hidden states. Hence, the theory results (i.e. propositions 3.1, 3.2, 3.12 and theorem 3.13) seem to transfer very trivially from RNNs.
* The 2 formulations for showing universality of SSMs in section 3.2 seem to deviate greatly from the SSM architecture (i.e. figures 3 and 4 do not follow the definition of SSMs in equations 1 and 2). It is unclear how much the result transfers to standard SSMs used in practice.
* In figure 5 and 6 there are no error bars or mentioning of number of seeds.
Technical Quality: 2 fair
Clarity: 4 excellent
Questions for Authors: Authors may address my comments above.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 4 excellent
Contribution: 1 poor
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely appreciate your helpful comments for us to provide clarifications on the SSM architecture, its relation to RNNs, and therefore the significance of our findings. Below is our response to the points raised:
1. "My main concern is most of the main results seems trivial: When taking away the tricks that makes SSMs work such as smarter initialisation and diagonal parameterisation, architecture wise, SSMs are simply an RNN without non-linearly between hidden states. Hence, the theory results (i.e. proposition 3.1, 3.2, 3.12 and theorem 3.13) seems to transfer very trivially from RNNs."
- SSMs are similar to RNNs in the sense that they stack linear RNNs and layer-wise nonlinearities alternately. **However, SSMs are different from linear RNNs, as linear RNNs are not universal approximators. While general nonlinear RNNs are universal, they introduce nonlinearity differently, leading to distinct dynamics between SSMs and nonlinear RNNs.** Therefore SSMs differ from both linear RNNs and traditional nonlinear RNNs. In hindsight, the universal approximation and exponential memory decay results might seem simple. However, these results contribute to the understanding of SSMs' excellent performance across various long-sequence modelling tasks.
- Since proposition 3.1 is a direct application of the universal approximation property of one-hidden-layer neural networks and proposition 3.2 is an inherent property of linear RNNs, they might seem simple from the proofs. However, these two propositions are the indispensable building blocks for achieving the main results (the proofs of universality of SSMs) in propositions 3.6 and 3.9. The exponential memory decay results in proposition 3.12 and theorem 3.13 show that layer-wise nonlinearity does not change the qualitative memory pattern of recurrent models.
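The exponential memory pattern can be illustrated with a short numerical sketch (a generic stable linear recurrence, not our trained models): the influence of an input impulse at $t=0$ on the hidden state shrinks geometrically with $t$.

```python
import numpy as np

# Exponentially decaying memory of a stable linear recurrence
# h_t = W h_{t-1}: an impulse injected at t = 0 fades geometrically,
# at a rate governed by the spectral radius of W (set to 0.9 here).
rng = np.random.default_rng(0)
m = 16
W = rng.normal(size=(m, m))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))   # spectral radius -> 0.9

h = rng.normal(size=m)                            # impulse response at t = 0
norms = []
for _ in range(200):
    norms.append(np.linalg.norm(h))
    h = W @ h

# After the transient, ||h_t|| decays roughly like 0.9^t.
assert norms[-1] < 1e-3 * norms[0]
```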
2. "The 2 formulations for showing universality of SSMs in section 3.2 seems to deviate greatly from the SSM architecture (i.e. figure 3 and 4 does not follow the definition of SSMs in equation 1 and 2). It is unclear how much the result transfers for standard SSMs used in practice."
- Equations 1 and 2 only illustrate the structure of single-layer state-space models. The constructions in Propositions 3.6 and 3.9 for multi-layer state-space models correspond to the structures demonstrated in Figures 3 and 4, respectively. **They show that by alternating between stacking simple linear RNNs and applying layer-wise nonlinearity, universality can be achieved.** **In practice, multi-layer state-space models are commonly used for various sequence modelling tasks (e.g., 2~32 layers in [1], 6 layers in [2], 4 layers in [3])**. Additionally, a single-layer state-space model without nonlinear activation is not universal and cannot be used to learn general nonlinear sequence-to-sequence relationships. Consequently, our theory characterises the standard SSM models prevalent in practical use.
3. "In figure 5 and 6 there are no error bars or mentioning of number of seeds."
- Thank you for bringing this oversight to our attention. Error bars and seed counts are indeed critical for interpreting experimental results. We have updated figures 5 and 6 to include error bars. The experiments are repeated 100 times to obtain the error bars.
---
[1] Albert Gu, Karan Goel, and Christopher Re. "Efficiently Modeling Long Sequences with Structured State Spaces." In _International Conference on Learning Representations_. 2021.
[2] Eric Nguyen, Karan Goel, Albert Gu, Gordon Downs, Preey Shah, Tri Dao, Stephen Baccus, and Christopher Ré. "S4nd: Modeling images and videos as multidimensional signals with state spaces." _Advances in neural information processing systems_ 35 (2022): 2846-2861.
[3] Albert Gu, Karan Goel, Ankit Gupta, and Christopher Ré. "On the parameterization and initialization of diagonal state space models." _Advances in Neural Information Processing Systems_ 35 (2022): 35971-35983.
---
Rebuttal Comment 1.1:
Title: Official Comment
Comment: Thank you authors, for your detailed response, I have adjusted the score accordingly | Summary: This paper analyzes properties of state space models (SSMs) and verifies the analytic results with numerical simulations.
The primary result is that SSMs have the same basic properties as classic RNNs. They perform temporal convolution on their inputs, with exponentially decaying memory and are universal function approximators.
Strengths: My overall assessment is that this paper is an important contribution to our understanding of SSMs. It raises really important questions about how SSMs are apparently able to learn long-range dependencies and points to the generality of exponentially-decaying memory.
The exposition was pretty clear; although the math is challenging for a broad audience, the simulations give confidence that the results are sound.
Weaknesses: It's worth thinking more about presentation of the figures. Figure 4 was pretty helpful but Figure 3 was pretty baffling. Generally speaking my suggestion is that the captions could be more explicit and all of the terms in the expressions should appear in the figures.
The paper raises a really important question that should be addressed more explicitly. If SSMs have basically the same properties as generic RNNs, why do they apparently work so much better? I appreciate that it may not be possible to answer this question with certainty, but some thoughtful discussion would really enhance the impact of this paper.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Why do SSMs work well empirically? Is it just that they more easily find long time constants in their exponential decay? I think it's really important to say something substantive about this.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 4 excellent
Limitations: Why do SSMs work well?
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely appreciate your positive review and insightful questions regarding the reasons behind the outstanding performance of SSMs. Below is our response to the points raised:
### **Weaknesses:**
1. "It's worth thinking more about presentation of the figures. Figure 4 was pretty helpful but Figure 3 was pretty baffling. Generally speaking my suggestion is that the captions could be more explicit and all of the terms in the expressions should appear in the figures."
- We greatly appreciate your helpful feedback regarding the presentation of our figures. We have enhanced Figure 3 by revising its captions and incorporating the necessary equations into the figure itself. Please let us know if this addresses your question or if you have any other feedback.
2. "The paper raises a really important question that should be addressed more explicitly. If SSMs have basically the same properties as generic RNNs, why do they apparently work so much better? I appreciate that it may not be possible to answer this question with certainty, but some thoughtful discussion would really enhance the impact of this paper."
- Since both SSMs and RNNs have universal approximation capabilities, their difference in performance mainly stems from the training process. Unlike traditional RNNs with recurrent nonlinearity, the linear RNN structure of SSMs significantly reduces the time cost of forward and backward operations, scaling it down from $O(T)$ to $O(\log T)$. While RNNs such as [4] used a sequence length of 100, the fast training speed of SSMs helps handle longer input sequences. Besides training speed, there could also be some other underlying reasons that require further ablation studies to pinpoint.
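The $O(\log T)$ claim rests on the associativity of the linear update: $h_t = a_t h_{t-1} + b_t$ composes as $(a_2, b_2) \circ (a_1, b_1) = (a_2 a_1,\, a_2 b_1 + b_2)$, so all prefixes can be combined tree-wise by a parallel scan. A minimal scalar sketch (illustrative only; it verifies the combine rule against the sequential recurrence rather than implementing the log-depth scan itself):

```python
import numpy as np

# The affine maps h -> a*h + b form a monoid under composition, which is
# what associative-scan implementations of linear RNNs exploit.
def combine(f, g):                  # apply f after g
    a1, b1 = g
    a2, b2 = f
    return (a2 * a1, a2 * b1 + b2)

rng = np.random.default_rng(0)
T = 32
a = rng.uniform(0.5, 1.0, size=T)
b = rng.normal(size=T)

# Sequential recurrence from h_{-1} = 0.
h, seq = 0.0, []
for t in range(T):
    h = a[t] * h + b[t]
    seq.append(h)

# Prefix composition (a parallel scan would build these in O(log T) depth).
pref, out = (1.0, 0.0), []          # identity map
for t in range(T):
    pref = combine((a[t], b[t]), pref)
    out.append(pref[1])             # h_t = prefix map applied to h_{-1} = 0

assert np.allclose(seq, out)
```

A nonlinearity inside the recurrence would break this associativity, which is why only the linear part is scanned and the nonlinearity is applied layer-wise.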
### **Questions:**
3. "Why do SSMs work well empirically? Is it just that they more easily find long time constants in their exponential decay? I think it's really important to say something substantive about this."
- Thank you for the inspiring question. We will incorporate the following discussion into the revision.
- In response to the inquiry on why SSMs (or specifically S4) perform well empirically, there are several dimensions to consider. Primarily, both SSMs and S4s possess a universal approximation property that forms the theoretical foundation of their numerical performance. Secondly, the parameterisation of the recurrent matrix and the time discretisation in S4 maintains stability as elucidated in [1]. This stability ensures the model's capability in approximating long-memory targets, even when the model's memory undergoes exponential decay. Thirdly, SSMs benefit from the parallelisation capabilities of linear RNNs through techniques like the fast Fourier transform or associative scan, which scale the time complexity down from $O(T)$ to $O(\log T)$. Moreover, the weight initialisation method illustrated in [2,3] further augments the model's capacity to learn long-term memory. Lastly, the success of S4 can be attributed to the integration of widely used techniques such as layer normalisation, dropout, and residual connections.
### **Limitations:**
4. "Why do SSMs work well?"
- In response to the query regarding the efficacy of SSMs, we have delved into an analysis encompassing various aspects such as universality, parameterisation, training methodology, initialisation, and other prevalent techniques, as elucidated in our reply to Question 3. While we acknowledge that our insights might not offer a comprehensive explanation, we are optimistic that our analysis provides a valuable contribution towards the community's evolving understanding of the model.
---
[1] Shida Wang, Zhong Li, and Qianxiao Li. "Inverse Approximation Theory for Nonlinear Recurrent Neural Networks." _arXiv preprint arXiv:2305.19190_ (2023).
[2] Albert Gu, Tri Dao, Stefano Ermon, Atri Rudra, and Christopher Ré. "Hippo: Recurrent memory with optimal polynomial projections." _Advances in neural information processing systems_ 33 (2020): 1474-1487.
[3] Albert Gu, Karan Goel, and Christopher Re. "Efficiently Modeling Long Sequences with Structured State Spaces." In _International Conference on Learning Representations_. 2021
[4] Jack Rae, Chris Dyer, Peter Dayan, and Timothy Lillicrap. "Fast parametric learning with activation memorization." In _International Conference on Machine Learning_, pp. 4228-4237. PMLR, 2018.
---
Rebuttal 2:
Comment: Thank you for the thoughtful response. | Summary: The authors set out to analyze state space models, which have been gaining popularity as alternatives to transformer based systems that can better model long range dependencies and are more computationally efficient. Since such models do not utilize a non-linear activation function along the temporal access, it is important to analyze if this poses a restriction to their modeling capabilities. The paper provides a construction-based argument to prove that as long as there are layer-wise nonlinearities in the model, it becomes a universal approximator for any sequence model and thus does not require activation functions in the temporal domain to improve model capacity. Further, the authors also show that like the earlier recurrent networks (RNN, GRU, etc.), these models also suffer from an exponentially decaying memory.
While the paper tackles quite a relevant topic on a family of models that are becoming widely popular, the draft itself could use some work. In particular, the presentation and clarity of the work needs to be improved, and it would be very helpful to also get some intuition behind the propositions mentioned and proved. That is, to understand what the propositions imply and why they are important.
Strengths: - The authors show that the state space models (SSMs) which have no non-linearity in their temporal axis can still universally approximate any sequential mapping up to some arbitrary error. This is proved by construction.
- The authors also show that like RNNs, such models also suffer from the exponential memory decay problem.
- Some preliminary numerical experiments are conducted to test the memory decay problem across different recurrent models.
Weaknesses: - The writing of the draft can be substantially improved. It would be good to include some explanations into the implications of each proposition and theorem, as to understand why exactly it is meaningful and important. The plots are currently taking a lot of extra space which can be freed to add this content.
- It would be nice if the authors could provide some kind of rates of memory decay to better understand if it is better or worse in SSMs than other RNN-based methodologies.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: - Line 48: Why is this relationship nonlinear? It looks like a linear function of $x$.
- Line 83: Does the definition of $\hat{\rho}_t$ involve $t$ in the exponential or $(t-s)$ which is written in the draft (mistakingly, I believe?)
- Could the authors provide some clarification and their reasoning on their construction of the memory function? Why is this reasonable? How is this computed?
- For universality results in Section 3.3, could the authors provide some intuition behind Remark 3.4, what it implies and how is it achieved?
- Is there a difference in some assumptions between Kolmogorov-theorem based construction with Volterra-Series based construction? Is the latter always superior? What cost does it come with?
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: The authors have adequately addressed the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely appreciate your constructive feedback on providing more explanation and intuition behind the mathematical proof. Below is our response to the points raised:
1. "The writing of the draft can be substantially improved. It would be good to include some explanations into the implications of each proposition and theorem, as to understand why exactly it is meaningful and important. The plots are currently taking a lot of extra space which can be freed to add this content."
- Thank you for the suggestions on the paper writing. We have incorporated the following implications into the final revision:
- The implications of Propositions 3.1 and 3.2 are modest yet vital: Proposition 3.1 works on the approximation of element-wise (or layer-wise) nonlinear functions, while Proposition 3.2 focuses on temporal convolution. The findings demonstrate that these two types of operators can indeed be approximated using state-space models. While the proof approach may appear elementary, it plays a pivotal role, laying the groundwork for the more advanced findings presented in Propositions 3.6 and 3.9.
- Implications of Theorem 3.5 and Proposition 3.6: The Kolmogorov-Arnold theorem shows the feasibility to **decompose any continuous multi-variable function into single-variable functions and addition operators.** In state-space model, the target causal time-homogeneous sequential relationship can be viewed as a multi-variable function where each input is a variable. Since linear RNN provides the addition across different inputs, the element-wise nonlinearity corresponds to the single-variable function.
- Implications of Theorem 3.8 and Proposition 3.9: The Volterra Series approach is a more deep-learning flavour construction while the Kolmogorov-Arnold theorem provides a classical finite-layer approximation result. See the answer to question 7 on the comparison of two approaches.
- Implications of Proposition 3.11 and Proposition 3.12: Despite the state-space model's proficiency in managing numerous long-sequence tasks, including those within the long-range arena, it exhibits an inherent exponential memory decay. **Our findings, however, suggest that the smart initialisation from S4 can moderate this issue, resulting in a slower memory decay function.**
2. "It would be nice if the authors could provide some kind of rates of memory decay to better understand if it is better or worse in SSMs than other RNN-based methodologies."
- Thank you for your valuable suggestion. We acknowledge the importance of understanding how memory decay in SSMs compared to other RNN. However, according to the survey paper [1], there is limited work on the rate of memory rate for different SSMs. Direct comparisons of these rates could be complex, given that they might hinge on multiple factors, including the number of parameters. We consider this as a topic for future exploration.
### **Questions:**
3. "Line 48: Why is this relationship nonlinear? It looks like a linear function of $x$."
- Yes, it is a typo. Thank you for pointing this out. We have modified it into `the first component` in the revision.
4. "Line 83: Does the definition of $\hat{\rho}_t$ involve $t$ in the exponential or $(t-s)$ which is written in the draft (mistakingly, I believe?)"
- Thank you so much for pointing out the mistake in the notation, it should be $\hat{\rho}_{t-s}$. We have modified it in the revision of the paper.
5. "Could the authors provide some clarification and their reasoning on their construction of the memory function? Why is this reasonable? How is this computed?"
- Due to word limitations, we have moved the reply to this question to the "Response to all Reviewer" section, specifically under the topic "Definition of Memory Function and How to Evaluate it".
6. "For universality results in Section 3.3, could the authors provide some intuition behind Remark 3.4, what it implies and how is it achieved?"
- We first clarify a typo in the Remark 3.4: `we use the same` $\Phi$. The first Kolmogorov theorem is proved for different $\Phi_q$ and $\phi_{q, p}$. Subsequent studies refine this process, minimising the variety of functions required to achieve the final form of Equation (16). This refinement can be interpreted as an application of **weight sharing** in the approximation of functions $\Phi$ and $\phi$. Essentially, the practice of weight sharing in SSMs is substantiated by the diverse results cited in Remark 3.4.
7. "Is there a difference in some assumptions between Kolmogorov-theorem based construction with Volterra-Series based construction? Is the latter always superior? What cost does it come with?"
- Thank you so much for your very thoughtful and inspiring question. There is no direct difference in terms of assumptions between Kolmogorov-theorem based construction and Volterra-Series based construction. Both constructions work for causal regular time-homogeneous nonlinear functionals, in other words, sequence relationships.
- Please refer to "Response to all Reviewers" , under "Highlights of Other Modifications" section, point number 2 for a detailed comparison between the two proof methods.
- Certainly, the Volterra-Series based construction comes with a cost similar to that of deep learning. Its complex structure makes the analysis of the model dynamics difficult to analyse.
---
[1] Haotian Jiang, Qianxiao Li, Zhong Li & Shida Wang. (2023). A Brief Survey on the Approximation Theory for Sequence Modelling. _Journal of Machine Learning_. _2_ (1). 1-30.
---
Rebuttal Comment 1.1:
Title: Official Comment
Comment: Thanks for the response as well as the clarifications provided. In light of this, I am inclined to raise my score. | Rebuttal 1:
Rebuttal: ## Response to all Reviewers
We thank all the reviewers for their insightful reviews. The reviewers thought the work tackles **"quite a relevant topic on a family of models that are becoming widely popular"**, provides result that **"is an important contribution to our understanding of SSMs"**, with **"an interesting and important proof"**. The experiments demonstrate a memory decay phenomenon that is **”particular interesting”** and **”give confidence that the results are sound”**.
Below we present a summary of our response and the corresponding major changes.
### Comparisons Between RNNs, SSMs and S4
**Memory decay**: All the recurrent models in the table suffer from exponential decay in memory. However, S4 has a slower decay (as shown in experiments) with a suitable initialisation.
| | Linear RNN| Nonlinear RNN| SSMs| S4|
|---|---|---|---|---|
| Recurrence| Yes| Yes| Yes| Yes|
| Universality| No| Yes (hardtanh,tanh) [1]| Yes (ours) | Yes (ours)|
| Temporal Parallel| Yes| No | Yes| Yes|
| Exponential decay| Yes, [2]| Yes, [3]| Yes (ours) | Yes, moderated (ours)|
### Definition of Memory Function and How to Evaluate It
- As a motivation for the memory function, [2] proves that a bounded causal continuous regular time-homogeneous linear functional has the following Riesz representation: $y_t = \mathbf{H_\mathrm{t}} (\mathbf{x}) = \int_{-\infty}^t \rho_{t-s} x_sds.$ Here $\rho_t$ is an $L_1$ integrable function. If $\rho_t$ rapidly decreases with $t$, then the target sequence map $\mathbf{H}_t$ has a short-term memory. Consequently, we refer to $\rho_t$ as the memory function, since it captures the memory property of a linear functional in its entirety.
- For consistency with the memory function $\rho$ in linear functional analysis, we substitute the original impulse test input $(1, 0, 0, \dots)$ with the Heaviside inputs denoted by $\mathbf{x}^{\textrm{test}}$ where $\mathbf{x}^{\textrm{test}}(t) = x,$ if $t > 0$, and $\mathbf{x}^{\textrm{test}}(t) = 0,$ if $t \leq 0$.
- Notice that the derivative of a linear functional at test input extracts the memory function $|\frac{d}{dt} H_t( \mathbf{x}^{\textrm{test}} ) | = |\rho(t)|_2$. Therefore a natural extension of the memory function $\rho_t$ to the nonlinear functionals is given in Section 2.3.
$\displaystyle \hat{\rho}(t) = |\frac{d \hat y_t}{dt}|_2$ where $\hat{y}_t = \widehat{\mathbf{H}}_t ({\mathbf{x}^{\textrm{test}}}).$
- Memory function can be evaluated by computing the model's derivative at the test input using finite difference method.
### Highlights of Other Modifications
1. We added discussion on **possible reasons for the good performance of SSMs**:
Primarily, both SSMs and S4s possess a universal approximation property that forms the theoretical foundation of their numerical performance. Secondly, the parameterisation of the recurrent matrix and the time discretisation in S4 maintains stability as elucidated in [3]. This stability ensures the model's capability in approximating long-memory targets, even when the model's memory undergoes exponential decay. Thirdly, SSMs benefit from the parallelisation capabilities of linear RNNs through techniques like the fast Fourier transform or associative scan, which scale the time complexity down from $O(T)$ to $O(\log T)$. Moreover, the weight initialisation method illustrated in [4,5] further augments the model's capacity to learn long-term memory. Lastly, the success of S4 can be attributed to the integration of widely used techniques such as layer normalisation, dropout, and residual connections.
2. We provided comparison between the two proof methods:
**An analogy between Kolmogorov-theorem and classical fully-connected neural networks can be drawn because of the finite number of layers in both.** It is shown in [6] that a simple function expressible by a small 3-layer feedforward neural networks cannot be approximated to a certain accuracy unless the network's width is exponential in the dimension. **In contrast, Volterra-Series shares similarities with deep learning**, and as a result, the advantages of deep learning over classical fully-connected neural networks carry over to this approach. **The authors are inclined to consider Volterra-Series based construction as relatively superior, and increasing the depth to be a more efficient way to scale up the model.**
3. Dimension of variables are attached to the paper, typos and grammar errors are resolved.
---
[1] Lukas Gonon and Juan-Pablo Ortega. "Reservoir computing universality with stochastic inputs." IEEE transactions on neural networks and learning systems 31, no. 1 (2019): 100-112.
[2] Zhong Li, Jiequn Han, E. Weinan, and Qianxiao Li. "On the Curse of Memory in Recurrent Neural Networks: Approximation and Optimization Analysis." In International Conference on Learning Representations. 2020.
[3] Shida Wang, Zhong Li, and Qianxiao Li. "Inverse Approximation Theory for Nonlinear Recurrent Neural Networks." _arXiv preprint arXiv:2305.19190_ (2023).
[4] Albert Gu, Tri Dao, Stefano Ermon, Atri Rudra, and Christopher Ré. "Hippo: Recurrent memory with optimal polynomial projections." _Advances in neural information processing systems_ 33 (2020): 1474-1487.
[5] Albert Gu, Karan Goel, and Christopher Re. "Efficiently Modeling Long Sequences with Structured State Spaces." In _International Conference on Learning Representations_. 2021
[6] Ronen Eldan and Ohad Shamir. "The power of depth for feedforward neural networks." In Conference on learning theory, pp. 907-940. PMLR, 2016.
Pdf: /pdf/5e93451358314aac94623c6e0a147ee2dd157d58.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: The paper presents universal approximation results for state-space models (SSM) with layer-wise nonlinearity. In particular, two-layer SSMs with layer-wise nonlinearity can approximate any continuous function over a compact set. Moreover, SSMs can approximate elementwise function and convolution. The paper also characterizes the memory behavior of SSMs, showing that their memory still decays exponentially when the eigenvalues of the A matrix is bounded from 1. These theoretical results are validated with numerical experiments.
Strengths: 1. Universal approximation results are helpful to theoretically characterize the expressive power of SSMs. These models are becoming more popular and these results are not yet known to the best of my knowledge. Consequently the paper could help better understand these models.
2. The presentation is quite clear. Even though this paper is theory-heavy, the main proof ideas are summarized in the main paper. This helps readers understand the technical challenges and the idea of the approach.
Weaknesses: 1. Lacking connection to linear RNNs. SSMs are a special case of linear RNNs, and universal approximation results of linear RNNs are available. Moreover, one can show that almost any linear RNNs can be written as SSMs (e.g. Sec 2.2 of Gupta et al. 2022). This is simply because almost any matrix A can be diagonalized, and put into RNNs. Could the university approximation results of linear RNNs be used to prove university approximation results of SSM through this connection?
2. Lack of applications, or a direction to help with applications. While the theoretical results could help with the understanding of these models, the paper does not point out potentially applications of these theoretical results. This might limit the significance of these results.
[1] Simplifying and Understanding State Space Models with Diagonal Linear RNNs. Ankit Gupta, Harsh Mehta, Jonathan Berant. 2022.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. What are the dimensions of input/output of the SSMs? The notation is unclear on this.
2. Eq (9): What's H(x_test). Why is rho_hat defined this way?
More generally I don't understand this section.
3. Why does the paper focus on approximation results for elementwise function and convolution?
Elementwise function has no recurrent so it's all about the layer-wise non-linearity?
4. I don't understand Prop 3.12. Is it the same constant c0?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: Not necessary.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely appreciate your constructive comments for us to clarify details in our work. Below is our response to the points raised:
### **Weaknesses:**
1. "Lacking connection to linear RNNs. SSMs are a special case of linear RNNs, and universal approximation results of linear RNNs are available. Moreover, one can show that almost any linear RNNs can be written as SSMs (e.g. Sec 2.2 of Gupta et al. 2022). This is simply because almost any matrix A can be diagonalised, and put into RNNs. Could the university approximation results of linear RNNs be used to prove university approximation results of SSM through this connection?"
- Thank you for your suggestion. We have attached a table comparing linear RNNs, nonlinear RNNs, state-space models, S4 in detail in 'Response to all Reviewer' section.
- Sec 2.2 of Gupta et al. 2022 claims that `Diagonal Linear RNNs (DLRs) are as expressive as general linear RNNs`. Indeed we may consider diagonal case of SSMs, but it is important to note that the result in Gupta et al. 2022 does not contribute to the universality of linear RNN, and therefore is not directly relevant with the proof of SSMs universality. Moreover, there is more to SSMs than linear RNN that we have taken into consideration, such as the unique architecture of alternating linear and non-linear layers. Therefore, the connection between the two might not be as strong unfortunately.
2. "Lack of applications, or a direction to help with applications. While the theoretical results could help with the understanding of these models, the paper does not point out potentially applications of these theoretical results. This might limit the significance of these results."
- While universal approximation theorem doesn't guarantee the ease of learning or the speed of convergence in training a network, it does provide a theoretical underpinning for understanding why, given enough data and computational power, neural networks are capable of excellent performance.
- While it is straightforward that linear RNN can only approximate linear functionals, we proved that state-space models (linear RNN with layer-wise nonlinearity) has the universal approximation property. This property is a theoretical guarantee for various applications. In practice, State Space Models (SSMs) demonstrate performance comparable to transformer models (see [1] [2]). Practitioners often question whether a model's success on one task extends to new tasks. **Our findings support the viability of using SSMs for these new tasks.**
### **Questions:**
1. "What are the dimensions of input/output of the SSMs? The notation is unclear on this."
- Thank you for pointing this out. We have updated variable dimension in SSM single layer general form as follows: $x \in \mathbb{R}^{d_{in}}, y \in \mathbb{R}^{d_{out}}, h \in \mathbb{R}^m, W \in \mathbb{R}^{m \times m}, U \in \mathbb{R}^{m \times d_{in}}, b \in \mathbb{R}^m, C \in \mathbb{R}^{d_{out} \times m}, D \in \mathbb{R}^{d_{out} \times d_{in}}$. For multi-layer state-space model, the nonlinear activation is added layer-wise as shown in Figure 1.
2. "Eq (9): What's H(x_test). Why is rho_hat defined this way? More generally I don't understand this section."
- Thank you for pointing out the typo in the paper. We have modified this in the full paper version in supplementary material as $\hat{y}_t = \mathbf{H}_t (\mathbf{x}^{\textrm{test}})$.
- For consistency with the memory function $\rho$ in linear functional analysis, we substitute the test input with the Heaviside inputs denoted by $\mathbf{x}^{\textrm{test}}$ where $\mathbf{x}^{\textrm{test}}(t) = x,$ if $t > 0$, and $\mathbf{x}^{\textrm{test}}(t) = 0,$ if $t \leq 0$.
- Due to the word limitations in individual responses, we have moved our reply to this question to the "Response to all Reviewer" section, specifically under the topic "Definition of Memory Function and How to Evaluate It."
3. Why does the paper focus on approximation results for elementwise function and convolution? Elementwise function has no recurrent so it's all about the layer-wise non-linearity?
- In our paper, the main result of universal approximation theorem is established in section 3.3 based on Kolmogorov-Arnold representation theorem and Volterra Series. In particular, our proof relies heavily on element-wise nonlinear functions and temporal convolutions - these serve as the primary building blocks in our theoretical structure. Therefore, in the section 3.1 and 3.2, we first establish the feasibility of approximating element-wise nonlinear functions and temporal convolutions with state-space models. While it is true that element-wise functions lack recurrence, they provide nonlinearity without breaking the recurrence. It is important to highlight that the nonlinearity between layers alone is sufficient to achieve universality of state-space models. For example, it can approximate nonlinear RNNs with nonlinear recurrent activations. We are happy to elaborate further on any point.
4. I don't understand Prop 3.12. Is it the same constant c0?
- As is shown in appendix B.7, it is the same constant $c_0$.
---
[1] Albert Gu, Karan Goel, and Christopher Re. "Efficiently Modeling Long Sequences with Structured State Spaces." In International Conference on Learning Representations. 2021.
[2] Jimmy TH Smith, Andrew Warrington, and Scott Linderman. "Simplified State Space Layers for Sequence Modeling." In The Eleventh International Conference on Learning Representations. 2022.
---
Rebuttal Comment 1.1:
Comment: Thanks to the authors for the explanation. I remain supportive of the paper. | null | null | null | null | null | null |
Learning to Influence Human Behavior with Offline Reinforcement Learning | Accept (poster) | Summary: This paper presents an investigation for learning to influence suboptimal human opponents in agent-human interactions. It claims that by performing offline RL on human-human interaction data, an agent can learn to influence its opponent's actions and latent strategies, even if there is no explicit information of influence in the data. In the first set of experiments, the authors showed that CQL achieved higher test returns when paring with human opponents, and in the second set of experiments, they showed that decoding latent representation for states performed better than two other opponent modeling technique.
Strengths: 1. The paper is well-written. Materials are organized in a way that facilitates reading, and the connection to existing literature is clearly stated.
2. Learning to influence human opponents, especially those suboptimal ones, is an important problem for human-agent interactions.
3. The idea of learning influence strategies from human-human data is interesting and intuitive.
Weaknesses: 1. There lacks explicit empirical evidence to support the claims made in this paper.
The authors claimed that: (a) agents can learn to influence their opponents' actions using offline RL (the CQL algorithm), and (b) agents can learn to influence their opponents' latent strategies. However, there is no direct quantitative evidence of the increase in such influence. The reported improvement in test return can be explained by many factors. For example, rather than influencing its opponent, an agent may proactively solve the task itself. The only evidence is four states of a trajectory, yet it is not sufficiently annotated for readers.
2. The presentation for modeling latent strategies is confusing.
This paper does not offer a tangible description of the "latent strategies". From eq (1), the "latent strategies" seem to be vectors that improve the prediction of the next states. But eq (1) seems not characterize the relationship between these vectors and the actions of the opponent. So I agree that they are latent vectors that improve policy learning, but I cannot see why they are strategies of human opponents.
Technical Quality: 1 poor
Clarity: 3 good
Questions for Authors: 1. Since this paper considers a multi-agent setting, I wonder how the MDP formulation in sec 3 covers the information of the opponents. In particular, do you consider the opponent's actions as part of the states or part of the actions?
2. Improvements in test return can be explained by multiple factors. To explicitly show that offline RL shapes human opponents' actions or strategies, more explicit evidence is necessary. For example, what about reporting the action frequencies of humans?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 1 poor
Presentation: 3 good
Contribution: 2 fair
Limitations: Yes, they have.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your review. The main concerns in your review seem to center around the evidence to support our conclusions. We hope to remedy this by providing additional evaluations that further support our hypothesis that the human does improve their behavior during evaluation (implying successful influence) and that the learned agent is not simply more adept at solving the task independently. We go into these additions in more detail below, and are happy to compute any others that you may suggest. Please let us know if these additions fully address the issues in your review.
**No direct quantitative evidence of influence**
You raised a good point that the improved performance may not be due to influence. To remedy this, we report the following additional metrics on the evaluation runs:
1. In addition to showing the cumulative reward obtained, we will show the difference in cumulative reward obtained between the first and last 100 timesteps of each evaluation. As shown in Figure 1 (of the attached file), the reward obtained towards the end of evaluation is much higher than in the beginning. Since the learned agent only changes behavior when the human partner does, this means that the human must have adapted their behavior over the course of evaluation.
2. To rule out that the learned agent is not simply “solv[ing] the task itself”, we evaluate all the considered approaches against a scripted policy that always performs the suboptimal action we would expect from a naive human (such as grabbing onions when tomatoes yield higher reward). In Figure 2, we see that the learned agent using our approach actually performs worse than naive offline RL. This means that our learned agent cannot be solving the task more successfully independently, but rather is trying to influence its human partner.
If you have ideas for other metrics that we can compute to support our conclusion further, we are happy to include them in the updated paper.
**Characterize the relationship between latent strategies and the actions of the opponent**
We agree that this would be informative. We attempt to do so in Figure 3, where we plot several latent strategies inferred from trajectories during evaluation (using PCA to reduce to 2 dimensions). We then colored the points that correspond to trajectories where the human partner goes to get tomatoes as red, and ones where the human goes to get onions as green. The plot shows that the red and green points are prominently in different clusters, showing that the latent strategies correspond to different high-level actions taken by the human.
**Answers to questions**
You also raised questions that we aim to answer below:
1. In this work, we consider the partner’s actions as part of the state, since they are observed by the learning agent. The partner’s intentions/strategy, however, are unobserved, making our MDP a special case of a POMDP. We will clarify this in the updated paper.
2. We agree that more evidence is necessary. We propose adding the following metrics described above, and shown in the attached file. We are happy to include any other metrics you might be helpful to support our conclusion.
---
Rebuttal Comment 1.1:
Comment: Thanks for your responses. I have adjusted my evaluations accordingly. | Summary: This paper presents an investigation into the application of offline RL on a dataset of human-human interactions to develop a policy capable of influencing human behaviors towards desired outcomes. The authors demonstrate that the learned policy not only affects immediate behavior but also influences the long-term strategies and preferences of the human partner. Experimental results provide strong evidence for the effectiveness of offline RL in learning to influence humans and achieving desirable rewards.
Strengths: 1. This paper is well-written and easy to follow.
2. The utilization of offline RL on an existing dataset of human interactions is a straightforward yet effective approach.
3. The problem of learning to influence human behaviors from existing datasets is both interesting and important for the field
Weaknesses: 1. The experiment solely employs offline RL methods as baselines. It would be beneficial to compare offline RL methods with approaches that leverage both offline data and online interactions. For example, one potential baseline could involve first acquiring a learned human model via behavior cloning and subsequently learning an adaptive policy to cooperate with the learned human model in the simulator, as demonstrated in [1].
2. The evaluation appears to be limited to assessments involving human participants. It would be valuable to evaluate the learned agent against a learned human model or scripted policies, as demonstrated in [1] [2].
3. The claim that there is no evidence of influence between humans, coupled with the assertion that the learned policy can induce long-term changes in human strategy, may be overly conclusive and requires further substantiation.
Reference:
1. Strouse, D. J., et al. "Collaborating with humans without human data." Advances in Neural Information Processing Systems 34 (2021): 14502-14515.
2. Yu, Chao, et al. "Learning Zero-Shot Cooperation with Humans, Assuming Humans Are Biased. ICLR 2023.
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: 1. The authors state that the dataset contains no evidence of influence. However, how can one differentiate between behaviors resulting from mistakes and those intentionally performed by the human player to influence their partner?
2. The data collection process outlined in Section 5 involves various approaches. What impact does the dataset have on the learned policy? Would the learned policy differ significantly if only the first or second set of instructions were used for training?
3. Section 6 claims that the learned policy can induce long-term influence on the latent strategies of the human partner. Besides the average episode reward, is there any additional evidence supporting the assertion that the learned policy genuinely influences humans in the long term? The case study presented in Figure 6 and Lines 390-393 demonstrates short-term behavior influence, where the agent blocks humans from picking up onions to encourage tomato selection. My question is whether this influence is truly long-term, meaning, would the human player exhibit a stronger preference for picking tomatoes after repeated instances of being blocked from picking onions by the learned agent?
Clarity issues:
1. In Line 208, the term "naive offline RL" is not immediately clear. Please provide a more precise explanation.
2. Regarding Figure 5 and Figure 6, it would be better to mark which chef is controlled by the human player and which chef is controlled by the learned policy.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 3 good
Contribution: 3 good
Limitations: This work currently focuses on relatively simple environments. It would be promising to scale the method to more complicated systems.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your review. You raised some interesting additional evaluations to strengthen our paper that we have added and discussed below. Please let us know if our response fully addresses the issues in your review.
**Compare with approaches that leverage both offline data and online interactions**
In this work, we considered methods that were purely offline (can only use human-human data) and did not have access to an environment or human simulator. We believe that this restrictive setting is important to study to find approaches that work in complex, real-world settings.
However, we agree that many existing approaches in the broader domain of human-AI collaboration use self-play, thus circumventing the need for a human simulator (though they still require a simulator of the environment/game, which is often easier to obtain). We believe that self-play approaches fail when the human partner is strategically suboptimal, and the learned agent can account for that by influencing the partner to change their strategy. Though Yu et al. consider biased humans, they do not consider that human behavior can change within a trajectory, thus not allowing for influence like we do. In Figure 4 (of the attached file), we reported results for an agent learned via self-play (FCP), and see that it does not perform as well as ours when the human has a misaligned objective.
**Evaluate the learned agent against a learned human model or scripted policies**
We believe that it is important to evaluate our work against real humans because they exhibit nuances that traditional simulators typically do not emulate. Namely, real humans are able to adapt their behavior, which allows for such behavior to be influenced by changes in state or the actions of other agents. However, we see that evaluating against simulated, stationary policies can still be useful to show that our approach actually works due to influence. To rule out that our improved performance is simply due to solving the task better independently, we evaluate all the considered approaches against a scripted policy that always performs the suboptimal action (such as grabbing onions when tomatoes yield higher reward). In Figure 2, we see that the learned agent using our approach actually performs worse than naive offline RL. This means that our learned agent cannot be solving the task more successfully independently, but rather is trying to influence its human partner.
**Claims require further substantiation**
For the claim that there is no evidence of influence between humans, we are not referring to influence of any kind, but specifically influence that assists in solving the new generalization task. Specifically, in the new task where one of the humans receives higher reward for plating, one form of influence (that offline RL learns) is to pass the plate to the human to encourage them to plate. We can confirm that this influence strategy does not appear in the offline data at all by simply inspecting all the trajectories (as there are only 25).
For the claim that there is long-term influence, we provide the following quantitative evidence. We show the difference in cumulative reward obtained between the first and last 100 timesteps of each evaluation. As shown in Figure 1, the reward obtained towards the end of evaluation is much higher than in the beginning. Since the learned agent only changes behavior when the human partner does, this means that the human must have adapted their behavior over the course of evaluation, i.e., the human learns to pick tomatoes over onions.
**Answers to questions**
The reviewer also raised questions that we aim to answer below:
1. We hope to have clarified what we meant by no evidence of influence above. Regarding accidental vs. intentional influence, we believe that both would result in the same actions being observed in the data, i.e., passing a plate looks the same even if done accidentally. Because we did not observe such actions in the dataset, we can confidently say that no accidental or intentional influence appeared.
2. This is an important point, as offline RL cannot come up with entirely new strategies from scratch. Our hypothesis is that the dataset must demonstrate a reasonable range of behaviors such that they can be stitched together to produce a new strategy. In the example of passing a plate, offline RL cannot learn to do this if there is no movement of objects at all in the dataset, but can if there is evidence of humans moving to and picking up objects from shared tables. We ensured that our dataset had these components.
3. We hope that our additional evaluation shows evidence of long-term influence. If you have others in mind, we would be happy to compute and include them in the updated paper.
---
Rebuttal Comment 1.1:
Comment: Thanks for your responses. I would maintain my original scores. | Summary: This work aims to provide some evidence that offline reinforcement learning techniques can be used in the field of human-agent collaboration to influence or improve the behavior and underlying strategies of humans. The authors first verified that agents trained by CQL can influence the behavior of human players in some scenarios. Then, by simply modifying the CQL by conditioning on human latent strategy representations, the agents can learn to adapt to changes in human behaviors.
Strengths: The research purpose of this work is an important topic in the field of AI, how to improve human performance in human-agent collaboration. The author's attempt to verify the effectiveness of offline reinforcement learning techniques in human-agent collaboration scenarios is commendable.
Weaknesses: 1. After carefully reviewing the manuscript, I personally think the current content (methods and experimental results) does not effectively demonstrate the thesis that offline RL can learn to guide humans toward better performance by combining human latent policy representations. I agree that the agent may learn to adapt to changes in human behavior through offline RL, but it remains unclear how offline RL can tangibly influence and improve human performance. Furthermore, there is a wealth of research in the field of human-agent collaboration, such as Du et al. [1] and Alamdari et al. [2], that focuses on developing assistive agents to improve human performance. It would be advantageous to discuss this research in the paper to provide a more comprehensive understanding of the field. In the experimental part, the existing experimental results (improved team rewards & few examples) cannot support this assertion either. I suggest that the authors provide some objective metrics of human participant performance to make the conclusion more solid.
2. User Study is crucial in human-agent collaboration research, and I’d suggest the authors provide additional information, including:
- Ethical review: Did participants provide informed consent for their involvement in this research and for the use of their data toward this project? Were they fully informed about the purpose of the research, did they confirm their approval for involvement, were they told about how to withdraw their data?
- Test settings: What is the proficiency of the participants in the game? Were they all novices or professionals? Was a standard test specification or guide provided before the test to ensure consistency of the test procedure?
3. Recent research in the field of human-agent interaction has highlighted the significance of evaluating agents using both objective and subjective metrics. See Strouse, et al.[3], and McKee, et al.[4]. Incorporating subjective metrics into the evaluation of agents can provide a more holistic understanding of their impact on human performance and well-being. For example, did participants prefer playing with the trained agent over other baseline agents? During the collaboration, did the participants perceive their actions to be influenced by the agent's behaviors? If yes, to what extent? Did all participants perform better as a result?
4. This work attempts to verify the effectiveness of existing offline RL techniques in some human-agent collaboration scenarios. Due to the lack of comparisons with state-of-the-art methods in the field of human-agent collaboration, such as Strouse, et al.[3], Yu, et al.[5], etc., it remains unclear where the boundary of offline RL in human-agent collaboration is.
5. There are some unevidenced and unsubstantiated claims in the manuscript. For example,
- In line 208, "It is evident that naive offline RL cannot learn adaptive policies."
- In line 179, "humans are passive and will often only respond to what their partner is doing"
6. Minor Concerns:
- Many references are not uniformly formatted, e.g., [27] uses "nature" while others use "Science", and it is inconsistent whether conference abbreviations, e.g., "(ICLR)", are retained.
- In line 369, "Thereofre" should be "Therefore"
- In line 412, "simulataneously" should be "simultaneously"
[1] Du, et al. Ave: Assistance via empowerment. 2020.
[2] Alamdari, et al. Be considerate: Avoiding negative side effects in reinforcement learning. 2022.
[3] Strouse, et al. Collaborating with Humans without Human Data. 2021.
[4] McKee, et al. Warmth and competence in human-agent cooperation. 2022.
[5] Yu, et al. Learning zero-shot cooperation with humans, assuming humans are biased. 2023.
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: See the revision recommendations in the "Weaknesses" section.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 3 good
Contribution: 2 fair
Limitations: As discussed above, this work lacks the comparison of SOTA methods, and the user studies are not sufficient, which would limit its reliability.
Flag For Ethics Review: ['Ethics review needed: Responsible Research Practice (e.g., IRB, documentation, research ethics)']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your review. The main concern raised in the review seems to center around the evidence that supports our conclusion. We hope to remedy this by providing additional evaluations that further reinforce our hypothesis that the human does improve their behavior during evaluation trials (implying successful influence), and offer some interpretability regarding the latent representations. We go into these additions in more detail below, and are happy to compute any others that you may suggest. You also asked for more details regarding the evaluation and discussion with related works that we offer below and will include in our updated paper. Please let us know if our response fully addresses the issues in your review.
**Current content does not effectively demonstrate influence**
We understand your viewpoint that the improved performance may not be due to successful influence. Therefore, we provide the following metrics to make our conclusion more solid. In our opinion, these are the most reasonable metrics we could compute using our current evaluation data:
1. In addition to showing the cumulative reward obtained, we show the difference in cumulative reward obtained between the first and last 100 timesteps of each evaluation. As shown in Figure 1 (of the attached file), the reward obtained towards the end of evaluation is much higher than in the beginning. Since the learned agent only changes behavior when the human partner does, this means that the human must have adapted their behavior over the course of evaluation.
2. To confirm that the learned agent is actually trying to influence its partner, rather than solve the task itself, we evaluate all the considered approaches against a scripted policy that always performs the suboptimal action we would expect from a naive human (such as grabbing onions when tomatoes yield higher reward). In Figure 2, we see that the learned agent using our approach actually performs worse than naive offline RL. This means that our learned agent cannot be solving the task more successfully independently, but rather is trying to influence its human partner.
3. To further support our hypothesis that using latent representations of strategy helps performance, we additionally show that the learned latent strategies map to different high-level actions. In Figure 3, we plot several latent strategies inferred from trajectories during evaluation (using PCA to reduce to 2 dimensions). We then colored the points that correspond to trajectories where the human partner goes to get tomatoes as red, and ones where the human goes to get onions as green. The plot shows that the red and green points fall predominantly in different clusters.
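As an illustration of this kind of analysis (not the authors' actual code), the following is a minimal sketch of projecting latent strategy vectors to 2D with PCA and checking that two behavior groups separate. The latent dimensionality, cluster parameters, and "tomato"/"onion" labels here are hypothetical stand-ins:

```python
import numpy as np

def pca_2d(latents):
    # Center the latent vectors and project onto the top-2 principal axes (via SVD).
    centered = latents - latents.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return centered @ vt[:2].T

rng = np.random.default_rng(0)
# Hypothetical 8-dim latent strategies from two behavior groups
# ("human goes for tomatoes" vs. "human goes for onions").
tomato = rng.normal(0.0, 0.1, size=(50, 8))
onion = rng.normal(1.0, 0.1, size=(50, 8))
points = pca_2d(np.vstack([tomato, onion]))

# If the latents encode the high-level behavior, the two groups should
# separate along the first principal component.
gap = abs(points[:50, 0].mean() - points[50:, 0].mean())
print(gap > 1.0)  # True for well-separated clusters
```

In a real analysis the latents would come from the trained strategy encoder rather than synthetic Gaussians, and the 2D points would be scatter-plotted with the behavior labels as colors, as in the rebuttal's Figure 3.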
If you have ideas for other metrics that we can compute to support our conclusion further, we are happy to include them in the updated paper. Your suggestion to include user feedback regarding which learning agent they preferred is very useful. However, this requires redoing our evaluation, which we could not do in the rebuttal but would be happy to do in the future if you feel it would strengthen our evidence.
**Discussion with other related works**
Thank you for bringing up these additional papers. We will include a discussion of them in our updated paper. Though the papers also look at coordination with humans, we view our paper as tackling a different problem, in that we consider influence rather than assistance. In the latter, the learned agent will always try to support the human’s intentions or strategy, whereas in our work, the learned agent tries to change the human partner’s strategy if it is suboptimal. Therefore, the methods proposed in the works you referenced would not be applicable in our setting, where the humans are not only suboptimal due to unfamiliarity with controls, but also due to having the wrong idea of what high-level action to take.
**Additional details on user study**
Thank you for raising this important concern. We provide the additional details below and will include them in the updated paper.
Ethical review: Our user study is IRB-approved. The participants did provide consent and were informed about the high-level goal (of influence during coordination tasks) but not of the methodology that we used or our specific hypothesis (that pure offline RL can achieve successful influence). Per our IRB protocol, there is no option to withdraw the de-identified data.
Test settings: The participants are not familiar with our specific game setting, though our controls are fairly intuitive and we provide detailed instructions on how to play. We found that the participants quickly adapted to the unfamiliar controls and learned to pilot their characters effectively. Therefore, much of the suboptimality came from not knowing the optimal high-level action to take.
**Lack of comparisons with state-of-the-art methods**
In this work, we considered methods that were purely offline (can only use human-human data) and did not have access to an environment or human simulator. We also specifically study the setting of influence in human-AI collaboration. In this setting, we believe that the Offline LILI baseline is the current state-of-the-art approach.
However, we do think that the self-play approaches you referenced, though they assume an environment simulator, are interesting to study as they are commonly used to solve general collaboration tasks. We believe that self-play approaches fail when the human partner is strategically suboptimal, and the learned agent can account for that by influencing the partner to change their strategy. Though Yu et al. consider biased humans, they do not consider that human behavior can change within a trajectory, thus not allowing for influence like we do. In Figure 4, we reported results for an agent learned via self-play (FCP), and see that it does not perform as well as ours when the human has a misaligned objective.
---
Rebuttal Comment 1.1:
Title: Let us know if you have any additional concerns
Comment: Thank you for your review and comments. We hope that our additional evaluations and rebuttal have addressed your primary concerns with our paper. We would really appreciate feedback as to whether there are any (existing or new) points we have not covered, and we would be happy to address/discuss them!
---
Rebuttal Comment 1.2:
Comment: I believe that detailed and comprehensive user studies are crucial to support the authors' claim that "agent can influence and improve human performance". However, the current user study results are not sufficient to support this conclusion, so I will maintain my recommendation. | Summary: This paper proposes a novel framework to empower an intelligent agent with the capability to effectively influence suboptimal human behavior during interactions. The primary objectives revolve around tackling two key challenges: (1) deducing a new strategy to influence human action and (2) learning to influence the human's long-term latent strategy. To evaluate the proposed framework's efficacy, a set of collaborative tasks (i.e., Overcooked environment) are employed. The experimental results demonstrate that the agent can successfully learn to influence human behavior based on a human-human interaction dataset, even though no instances of successful influence were present.
Strengths: Generally, I find this paper to be well-motivated, accompanied by clear and easily understandable examples. The organization and presentation of the content are well-structured, making it easy to read and comprehend. The focus of the paper lies in addressing the critical issue of enabling intelligent agent to influence suboptimal human behavior which is expected to be a topic of significant interest in the human-AI interaction research field. The proposed approach to employing offline reinforcement learning to tackle the problem is innovative such that it successfully extends the application of offline reinforcement learning in a novel context. The evaluation section is comprehensive and thorough, providing compelling evidence that supports the efficacy of the proposed method.
Weaknesses: First, for the example in line 66, if we consider the real world scenarios, such behavior (i.e., repeatedly blocking the human from doing something) without providing any explanation would largely affect the human trust in the robot. This example appears to assume that the human will naturally respond by adjusting their strategy to avoid that ingredient in the future. However, this implicit assumption is quite strong and may not hold true in real-world situations.
As the authors mention in the last section, the experiments were conducted on simple simulated game-like environments rather than real world scenarios. It would be intriguing to see how the proposed approach performs when applied to more complex real-world tasks involving human-AI interactions.
Furthermore, when we talk about human-AI interaction, it includes both collaborative and competitive interactions. However, the scope of the paper is focused solely on collaborative settings, without investigating or evaluating the approach in competitive scenarios. Expanding the research to encompass competitive interactions may provide a more comprehensive understanding of the proposed method's versatility and potential applications.
Finally, in the evaluation section, the authors compare the performance between BC and offline RL methods. I feel that the paper could be further strengthened by incorporating a comparison with prior works that approximate human decision-making models and subsequently learn to influence human behavior using the learned model. This broader comparison would enhance the paper's context and shed light on the relative advantages and contributions of the proposed approach in the context of existing research.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: 1. When the agent tries to influence human behavior, is it possible that the human gets annoyed, particularly if the influence becomes excessively insistent, dominating, or aggressive? Such a scenario could potentially lead to negative reactions from the human and undermine the effectiveness of the interaction.
2. In the provided example, the authors mention the human goes to plate when he sees the plate is right next to him. This situation seems to assume that the human has a myopic perspective. I am wondering whether the proposed approach is generalizable if the human suboptimal behavior can be attributed to other various possible reasons.
3. The agent's objective is to elicit the expected human behavior. However, the agent may generate positive action (e.g., pass the plate) or negative action (e.g., block the way). These actions may significantly impact human perception and trust. How can you ensure that the agent influences human behavior effectively without causing harm to the team?
4. Does the agent's active influence imply a leadership role, as it takes on a guiding role in shaping the collaborative decision-making process?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 3 good
Limitations: I appreciate the authors for the limitation and ethical implications section in the paper. Additionally, given the paper's objective of enabling the agent to influence human behavior and long-term strategies, it is essential to consider potential impacts on human trust and the possibility of unintentionally biasing the human towards a worse policy. These factors warrant serious consideration to ensure that the proposed approach fosters responsible and ethical human-AI interactions. Addressing these concerns would contribute to a more comprehensive understanding of the potential implications of the proposed method.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your review. We agree with you regarding expanding the scale and complexity of our evaluation. We view our work as a first step in showing that influence can be done purely offline (without any environment or human simulators), and aim to show more compelling examples of this in the future. You also raised some interesting points that we aim to address below:
**Assume that humans will naturally respond by adjusting their strategy**
Note that our algorithm for influencing human strategies/policies (such as blocking an ingredient to change the person’s role in the future as well, as in the example the reviewer mentions) will only learn what appears in the data – these changes are only assumed to stick if they stick in the data, i.e., that latent transition (or similar) is observed. Of course, offline RL can then stitch transitions in novel ways, but it will not assume a latent state / strategy transition is possible unless the data shows it. The version of our algorithm that influences human actions through state changes does not assume long-term changes in the human’s policy (but this can be unrealistic sometimes, which is why accounting for strategy change is actually needed). That said, we certainly agree that this needs to be tried out in more realistic settings. We are excited about that, and at the same time point out that this is a step well beyond prior work on influence via RL, which has not studied real humans at all.
**Expanding to competitive interactions**
Though competitive settings are prevalent and interesting to study, we believe they are less relevant for influence because agents can do well by assuming the human is optimal and computing a best-response (see, for instance [cite the rohin-micah paper]). However, we agree that even in competitive settings, an agent can potentially learn to exploit a suboptimal agent. But our goal from the start has been to show how we can improve human suboptimal behavior, rather than exploit it.
**Incorporating a comparison with model-based works**
In this work, we considered methods that were purely offline (can only use human-human data) and did not have access to an environment or human simulator. We believe that this restrictive setting is important to study to find approaches that work in complex, real-world settings.
In the broader domain of human-AI collaboration, there are human-aware RL methods, which learn a human simulator from data and compute a best response via RL in an environment simulator [cite]. However, the topic of _influence_ has not been studied with these methods. Our work can be seen as paving the way towards studying influence in these settings where access to the environment model can be assumed, and our results show promise for these types of methods as well. In fact, the techniques from Offline RL that protect against distribution shift might very well apply to that setting as well, where driving the human model OOD could lead to poor real world performance.
There also exist approaches that use self-play, thus requiring only an environment simulator (which is more reasonable to obtain in practice) rather than a human one. We believe that self-play approaches fail when the human partner is strategically suboptimal, and the learned agent can account for that by influencing the partner to change their strategy. In Figure 4 (of the attached file), we reported results for an agent learned via self-play (FCP), and see that it does not perform as well as ours when the human has a misaligned objective.
**Answers to questions**
The reviewer also raised questions that we aim to answer below:
1. Whether or not the human gets annoyed is an interesting consideration. We have not measured the users’ subjective perception in the study, but can report that whatever that perception was, it did not negatively affect the effectiveness of the interaction – we see that the collaborative task performance goes up. That said, the reviewer is 100% right that this could get annoying, and further work should examine tools to assess online whether influence is working on the individual, and perhaps adapt the type and amount of influence to the user’s response and even preferences.
2. The myopia point is a fascinating one, thank you for bringing this up. It does seem that real people tend to be reactive in this way, and our algorithm can leverage that. But no, it does not seem that the method would be limited to myopic bias. For instance, if people have partial information about the state and that’s why they take suboptimal actions, the agent might figure out that putting certain objects in their field of view might make the user aware of them and therefore change their actions. Or more speculatively, imagine a language interaction where the agent tries to influence people to donate to a charity – there, biases like a propensity toward anecdotal evidence might occur, and our method would be able to pick up the fact that people behave differently once they hear such anecdotes. Like in the first question, one might expect the effects to show up in the task performance, which the agent is optimizing. The counterargument is that not annoying users should be a first-class citizen in the optimization objective – we certainly agree and think it would be useful to study reward learning applied to this type of “influence”-related preferences.
3. Yes, it is assumed that the human is suboptimal and the agent can help them overcome this, which implies the agent has better knowledge or computation than the human, putting them indeed in a leadership role. We believe when this is not true, it is more effective to learn AI agents that assist with the human’s strategy, which is an orthogonal paradigm that has rich prior literature.
---
Rebuttal Comment 1.1:
Comment: I really appreciate the responses, which addressed some of my concerns. Having reviewed the rebuttal and other reviews, I agree that further experiments and real user studies would strengthen the work. At the same time, it's noteworthy as an initial endeavor to influence human behavior through offline RL with human-human interaction. I would maintain my score. | Rebuttal 1:
Rebuttal: Based on reviewer feedback, we have performed multiple additional evaluations. We reference each of them in our individual responses to each reviewer, but also provide an overview of the new results below.
In Figure 1, we compute the reward improvements of our proposed method across all the experimental layouts. This improvement is computed as the difference between the reward accumulated in the last 100 timesteps, and the first 100. By showing a noticeably positive improvement, we aim to show that our method is successfully changing the behavior of its human partner via influence.
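For concreteness, the improvement metric described above (last-100-timestep reward minus first-100-timestep reward) can be sketched as follows. This is an illustrative sketch, not the authors' code; the per-timestep reward series and window size are assumptions:

```python
import numpy as np

def reward_improvement(rewards, window=100):
    # Reward accumulated in the last `window` timesteps minus
    # the reward accumulated in the first `window` timesteps.
    rewards = np.asarray(rewards, dtype=float)
    return rewards[-window:].sum() - rewards[:window].sum()

# Hypothetical evaluation episode where per-timestep reward rises
# as the human partner adapts their strategy.
episode = np.linspace(0.0, 1.0, 500)
print(reward_improvement(episode) > 0)  # True: positive improvement
```

A noticeably positive value under this metric is consistent with the human partner's behavior changing over the course of the episode, which is the evidence of influence the rebuttal points to.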
In Figure 2, we evaluate our proposed method (along with all baselines) against a scripted policy that will always perform the suboptimal high-level action, i.e. pick up onions when tomatoes yield higher reward. Here, we see that our proposed approach actually performs worse than baselines. This is to show that the reward improvement is actually due to influence, and not our learned agent solving the tasks more successfully independently.
In Figure 3, we visualize the learned latent strategies (as 2-dimensional embeddings obtained via PCA). Here, we show that different high-level actions map to different points in the space. This is to provide some interpretability on the learned latent strategies.
In Figure 4, we add an evaluation of an agent learned via self-play (specifically Fictitious Co-play). Because the self-play approaches often do not account for suboptimal partners, the approach performs much worse than ours in the generalization tasks where the human partner is guaranteed to be strategically suboptimal.
Pdf: /pdf/0b7d7668f0edf3791aa089baaccdb5d40404dc84.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
CLeAR: Continual Learning on Algorithmic Reasoning for Human-like Intelligence | Accept (poster) | Summary: This work tackles the problem of continual learning for rule-based tasks. The method first maps algorithmic inputs to a discrete latent space. This mapping is one-to-many, which allows the method to learn multiple features for the inputs of each task. A model then consumes results from this discrete latent space to perform the task. The authors test and verify the method works on various algorithmic tasks drawn from the Chomsky hierarchy.
Strengths: - The authors define a new set of problems for continual learning
- The approach for mapping inputs to learned features makes sense
- Continual learning is becoming more prevalent as models become more expensive to train from scratch, so the work is timely
Weaknesses: - The framing uses catastrophic forgetting, but whether catastrophic forgetting is a pressing issue in today’s deep models is an open question. We do not know how increasing model scale impacts catastrophic forgetting, and deep networks like GPT-3 are fine-tuned often in production. Recent empirical results on catastrophic forgetting in neural networks are not as dramatic as the authors imply, and these works should be cited. See my comments in the questions section for references.
- It’s unclear how the experiments in the paper map onto real world problems. Can the authors say more about this? When would we want to train a model sequentially over algorithmic tasks?
- It seems CLeAR gets outperformed by multitask and single task training. Is there a baseline that would demonstrate the lift of this approach?
See questions for other concerns.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: Major questions:
- Line 229: How do you control the size of the mapping set?
- How much of the performance is explained by learning regular languages well and nothing else? For example, in the context-free tasks, are the models just solving the subset of examples that can be modeled via a regular language?
- Is the CL setting better than the pretrain-finetune or meta-learning settings, where we might pretrain/meta-train over multiple tasks and then adapt to one task?
Minor
- Line 24: The decline is not necessarily rapid. See [this](https://arxiv.org/abs/1910.08475) [this](https://arxiv.org/abs/2102.07686) for more recent citations to catastrophic forgetting.
- Line 113: “the model performing backward transfer is still limited.” What does this mean?
- Line 126: Missing citations to [this](https://arxiv.org/abs/2106.16213) [this](https://aclanthology.org/2020.acl-main.43/) and related works on the languages that can be recognized by transformers and RNNs. These works essentially define the upper bounds to your empirical work.
- Table 3: What is the logic behind the bold numbers here?
Nits:
- Line 16: typo “during following”
- Line 83: could you spell out LwF?
- Line 365: duplicated references
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 2 fair
Contribution: 2 fair
Limitations: I would argue that an additional limitation is the latent space design itself. More accurately, the discrete latent space comes with tradeoffs. Because the method does not define a distance metric over the discrete latent space, there is no notion of exemplars or typicality in the mapped sequences. By design, all the sequences represent features of equal importance. However, one might presume that for some tasks, certain features of task deserve a higher weighting than others. This seems easier to solve in a continuous latent space, where you can map the importance of a feature onto distance.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: *We thank you for the positive comments and suggestions. We have addressed each of your questions below.*
***
## Weaknesses
**W1** We appreciate your insightful analysis on catastrophic forgetting, especially on recent LLMs. Following your suggestion, we have incorporated the references you mentioned into our work (Global Comment Literature Reviews 2).
***
**W2** So far, CL-AR has been a challenging field in which deep learning models have yet to demonstrate practical real-world performance, which makes direct everyday applications difficult.
In the most ideal scenario, a robot, like a human, continuously acquires logical knowledge while integrating previous knowledge without losing it. In more practical settings, medical and industrial datasets that require logical reasoning may benefit from CL-AR, especially when the full set of tasks cannot be defined from the beginning, or when access to previous data is prohibited due to privacy issues.
In a broader scope, we believe that our method of continually training logical skills could contribute to future LLM training, much as OpenAI is now trying to enhance performance on reasoning tasks through process supervision.
Ultimately, we expect that through the challenging CL-AR learning scenario that current neural networks struggle with, we may gain insights into how humans learn and integrate knowledge in the brain.
***
**W3** To establish a baseline for assessing the performance of our model, we attempted the following three approaches.
First, we compared the performance of our CLeAR method with the Conventional CL algorithm as a baseline. The performance of the EWC, LwF, and ER methods is indicated in Supplementary D, demonstrating that our CLeAR significantly outperforms these methods.
Second, to further demonstrate the lift of our approach, we calculated the difference in accuracy between the end of each task's initial training and the end of all tasks, which indicates how much information the model lost during CL. CLeAR not only retained nearly all information but also frequently exhibited backward transfer of information to previous tasks.
Third, according to the survey "A continual learning survey: Defying forgetting in classification tasks" (2021), multitask (joint) training is considered a soft upper bound in CL scenarios, and most conventional CL methods show a considerable performance gap relative to it. Although our work is the first on CL scenarios in a completely new field, approaching joint-training performance already indicates that excellent continual learning is taking place. In fact, our model came very close to joint-training performance and even surpassed it on multiple tasks.
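The forgetting measure in the second point above is closely related to the standard backward-transfer (BWT) metric; here is a minimal sketch with a hypothetical 3-task accuracy matrix (the paper's actual evaluation protocol may differ):

```python
import numpy as np

def backward_transfer(acc):
    """acc[i][j] = accuracy on task j after training up to task i (T x T).
    BWT averages, over all but the last task, the accuracy change between
    the end of the task sequence and the moment the task was first learned.
    Positive BWT means later training improved earlier tasks."""
    acc = np.asarray(acc, dtype=float)
    T = acc.shape[0]
    return np.mean([acc[T - 1, j] - acc[j, j] for j in range(T - 1)])

# Toy 3-task accuracy matrix: task 0 even improves after later training.
R = [[0.95, 0.10, 0.12],
     [0.97, 0.90, 0.15],
     [0.98, 0.88, 0.92]]
print(round(backward_transfer(R), 3))  # 0.005
```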
***
## Questions
**Q1** We apologize for the insufficient explanation. We understood the "mapping set" you mentioned as the dimension of the mapping space. This dimension is a hyperparameter we set, similar to the model's hidden dimension; it is advisable to choose a larger value when the input alphabet is large. Our mapping function aligns the raw input sequence to the chosen mapping-space dimension. Furthermore, it is possible to increase the dimension of the mapping space during CL to readjust its size.
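As an illustration only (the paper's actual mapping function is not reproduced here), a one-to-many mapping of task symbols into a fixed-dimensional binary mapping space might look like the following; all names and the codes-per-symbol choice are hypothetical:

```python
import random

def make_one_to_many_mapping(alphabet_size, dim, codes_per_symbol=4, seed=0):
    """Assign each input symbol several distinct binary codewords in a
    shared `dim`-dimensional mapping space (one-to-many). A larger
    alphabet calls for a larger `dim`, mirroring the rebuttal's point."""
    rng = random.Random(seed)
    used = set()
    table = {}
    for s in range(alphabet_size):
        codes = []
        while len(codes) < codes_per_symbol:
            c = tuple(rng.randint(0, 1) for _ in range(dim))
            if c not in used:
                used.add(c)
                codes.append(c)
        table[s] = codes
    return table

def map_sequence(seq, table, rng):
    # Each occurrence independently picks one of its symbol's codewords.
    return [rng.choice(table[s]) for s in seq]

table = make_one_to_many_mapping(alphabet_size=3, dim=8)
mapped = map_sequence([0, 1, 2, 0], table, random.Random(1))
print(len(mapped), len(mapped[0]))  # 4 8
```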
***
**Q2** According to formal language theory, tasks from a higher level of the Chomsky hierarchy (context-free tasks) cannot be represented by a grammar from a lower level (regular grammar) or solved by the corresponding automata (finite-state automata).
Therefore, even if a finite-state automaton produced correct answers on some samples, it is appropriate to attribute this to random chance. However, a neural network is theoretically a universal approximator capable of solving all such problems, akin to a Turing machine. Consequently, if a neural network generates correct answers for certain samples, further analysis is necessary to determine whether this is due to random chance or whether the network has indeed learned certain aspects of the task.
***
**Q3** CL, pretrain-finetune (P-F), and meta-learning (M-L) address different learning scenarios. It seems difficult to say which setting is better. In the context of AR, the goal is to excel at individual tasks rather than creating a generally superior model, which is why we found M-L incompatible. Additionally, due to the shifting data distributions for each task and the need to adapt well to previously learned tasks, we considered CL more suitable than P-F. During this rebuttal, we conducted additional experiments and discovered possibilities that pretraining on multiple tasks might not be advantageous in learning multiple AR tasks, making our CL-AR setup worthwhile. (Please refer to **General Response Experiment 2 and iPYe Q1** for detail).
***
**Q4** Apologies for the insufficient description. This statement implies that even recent CL methodologies find achieving positive backward transfer quite challenging. Recent methods such as AFEC (NeurIPS 2021) achieve only slight positive backward transfer on simple image datasets.
***
**Q5** Thank you for introducing important papers related to this study. As per your comment, we will make sure to cite the two mentioned papers in the related work section of the final version (**Global Comment Literature Reviews 3**).
***
**Q6** For the CL Finals column, the emphasis is on results above 90%, and for BWT and FWT columns, on positive values. However, the non-bold case does not imply low CL performance. Due to the text limit, please refer to **iPYe W1** for a detailed explanation.
***
## Limitations
**L1** We agree with your statement. Expanding this work into distance-preserving mapping in a continuous feature space would be meaningful. As part of future work, we intend to explore our continuous feature space mapping using models such as diffusion models.
---
Rebuttal Comment 1.1:
Title: Looking forward to your post-rebuttal feedback
Comment: Dear Reviewer aCtv
Thank you again for the insightful comments and suggestions. Since the deadline for our discussion is approaching, we sincerely look forward to your follow-up response. We are happy to provide any additional clarification that you may need.
For your convenience, we provide a summary below:
* #### Described how CL-AR tasks can be utilized in the real world, with connections to LLMs and human learning.
* #### Discussed the phenomenon of CL often surpassing multitask learning in CL-AR (multitask learning is a soft upper bound in traditional image-based CL).
* #### Devised a novel approach to systematically analyze what kind of hierarchical representation or compression of previous "knowledge" enables CL (attached **PDF & iPYe Q1**).
* #### Explained how to adjust the size of the mapping set, and whether context-free tasks can be explained by regular languages from the perspective of formal language theory.
* #### Added references on LLM catastrophic forgetting and previous AR-related research.
* #### Enhanced the aspects of the paper where the explanation was insufficient.
We hope that the provided new experiments and the additional discussion have convinced you of the merits of this paper. Please do not hesitate to contact us if there are additional questions.
Meanwhile, we thank you for your very helpful comments. It would indeed make our paper clearer and stronger.
Thank you for your time and effort.
Best regards, Authors
---
Rebuttal 2:
Title: A kind reminder
Comment: ### Dear reviewer aCtv
We wanted to kindly remind you that the interactive discussion phase will be ending in just a few hours. Unfortunately, we won't be able to engage in further discussions with you after the deadline. We hope that our response has addressed your concerns, and turned your assessment to a more positive side. Please let us know if there are any other things that we need to clarify.
We thank you so much for your helpful and insightful suggestion.
Best, Authors | Summary: This paper introduces CLeAR, a novel algorithmic reasoning (AR) methodology for continual learning (CL) of abstract logical concepts. The main component of CLeAR is a one-to-many mapping strategy that aligns various tasks of different dimensions in a shared mapping space, allowing for the gradual learning of abstract concepts. The paper addresses the challenges specific to CL for AR tasks, such as decorrelated input data, dynamic dimensions of datasets, and the goal of generalization for out-of-distribution data. Extensive experiments are conducted, demonstrating that CLeAR achieves near-zero forgetting and improved accuracy during subsequent training, outperforming existing CL methods designed for image classification. The paper highlights the importance of studying CL in the context of abstract logical concepts and offers valuable insights into human learning mechanisms.
Strengths: The paper introduces a novel methodology, CLeAR, for continual learning of abstract logical concepts. It addresses the lack of research in this area and presents a unique approach to aligning tasks of different dimensions and sharing semantics. By focusing on abstract algorithmic reasoning tasks, the paper bridges the gap between CL and real-world cognitive skills development. The proposed one-to-many mapping strategy addresses the challenges unique to abstract concept learning and offers a solution for preventing forgetting and improving accuracy during subsequent training.
Weaknesses: From my point of view, I'm not sure why end-to-end algorithmic reasoning is important when this problem can easily be solved via symbolic reasoning with pre-defined rules and structures. Also, I think the paper should include some other baseline experiments, like standard CL methods, for comparison. Foundation models can also be considered if possible. This would provide a better understanding of the performance improvement achieved by CLeAR and allow for a more comprehensive assessment of its effectiveness.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: - Could the authors provide further insights on why end-to-end approaches are necessary or beneficial in comparison to symbolic reasoning with pre-defined rules and structures?
- Have the authors considered including other CL methods or foundation models in the evaluation? I guess most of the tasks listed in the paper can be solved by models like GPT-4. So why is learning Algorithmic Reasoning important?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: See weaknesses and questions.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: *We thank you for the positive comments and suggestions. We have addressed each of your questions below.*
***
## Weaknesses
**W1** (Aligned with Question 1) We appreciate your insightful comment. Frameworks such as **DeepProbLog (2018), HOUDINI (2018), NeuralTerpret (2017)**, and others have been developed using symbolic reasoning with predefined rules and structures to solve AR. We intend to explain the significance of our end-to-end algorithmic reasoning by focusing on the differences between our work and these frameworks.
Firstly, our model does not rely on any human (prior) knowledge about the tasks that the model will learn. The model purely performs "Learning from data" without knowing what logical tasks it is dealing with. In contrast, the aforementioned frameworks require pre-configured programs that handle logic using human knowledge of each task. This contradicts the assumption of continual learning, as it is impossible to anticipate and program for all potential problems that may arise in the real world.
Secondly, our paper performs logical operations using a pure neural network. While the aforementioned frameworks do utilize neural network structures internally, they mainly use them as feature extractors, such as extracting features from MNIST digits or recognizing arrow directions. Even in lifelong learning scenarios presented in those frameworks (e.g., tasks like summing two images to performing arithmetic operations on multiple images), the neural network acts as a feature extractor, while pre-defined programs perform the actual logical operations.
Lastly, the mapping methodology proposed in our paper allows for continual learning of multiple tasks with different dimensions, relying solely on data without any prior knowledge about the tasks. While the aforementioned papers also introduce sequential learning scenarios, they lack scalability as they either rearrange modularized programs for each task or create predefined parameterized programs using knowledge of all the future tasks from the beginning.
Our research proposes a method where the model can learn any task without the need for human intervention to change the model's structure for each task. This is similar to how a person who has learned arithmetic can smoothly integrate previous knowledge when learning linear algebra without rewiring the brain, approaching the key goal of CL, which is to learn consecutive tasks like a human.
***
**W2** We completely agree with what you said. The comparison results with the three baseline methods are currently only presented in the supplementary section D. As a result, we observed that CLeAR outperforms all other CL methods in terms of final CL accuracy and knowledge backward transfer. In the final version of the paper, we will mention the comparison results of the three baseline CL methodologies, **EWC (Table 3 supple.), LwF (Table 4,5 supple.), and ER (Table 6 supple.)**, in the main paper.
***
**W3** We appreciate your inspiring comment. Recently in the field of CL, there is a growing trend of using foundation models such as LLMs (large language models). Based on your comment, to obtain a deeper understanding of our model's functioning, we first conducted a more comprehensive assessment using the new analytical methods that we propose. Through this, we discovered that CLeAR learns reasoning by internally simulating automata and is capable of integrating automata states for new tasks. Additionally, we found that it can exhibit higher learning capability than joint training, which is typically considered an upper bound in continual learning (Global Comment Experiments 2). As LLMs have demonstrated high performance across various domains, applying a foundation model to CL-AR tasks would be a novel research direction; in future work, we will attempt CL-AR task learning and evaluation with an LLM approach, utilizing resources like the GPT-4 API or Alpaca (2023).
***
## Questions
**Q1** Explained in W1
***
**Q2-1** As mentioned in W2, we compared the performance of three baseline CL methods using three different mapping methods and provided the results in the Supplementary Material.
***
**Q2-2** In addition to its pure academic value in studying the process of logic learning in models, AR also holds significance in relation to LLM for the following reasons.
According to recent research on Transformer architecture and algorithmic reasoning (ref[11,24] in main and "Simplicity Bias in Transformers and Their Ability to Learn Sparse Boolean Functions" (2022)), Transformers often struggle with generalization to out-of-distribution (OOD) data of varying lengths due to their tendency to learn shortcuts in AR tasks.
For instance, regardless of the number of model heads, hidden dimensions, training epochs, or the type of positional embedding, transformers fail to learn the algorithms behind very simple (for humans) tasks like parity check (PC: whether the sum of a binary sequence is even or odd). Despite showing 100% accuracy on test sets of the same length as the training data, the model's accuracy drops to the 50% random-chance level on longer OOD sequences. This is attributed to the Transformer architecture's tendency to memorize the input data distribution rather than learn rules, leading to shortcut learning. From this perspective, LLMs built on the Transformer architecture could encounter similar challenges. In a broader scope, we think that a methodology of continually training our AR tasks could offer new insights for enhancing LLM architectures, much like OpenAI's recent attempts at step-by-step learning through process supervision.
Ultimately, we expect that through the challenging CL-AR learning scenario that current neural networks struggle with, we may gain insights into how humans learn and integrate knowledge in the brain.
---
Rebuttal Comment 1.1:
Title: Looking forward to your post-rebuttal feedback
Comment: Dear Reviewer GYfA
Thank you again for the insightful comments and suggestions. Since the deadline for our discussion is approaching, we sincerely look forward to your follow-up response. We are happy to provide any additional clarification that you may need.
For your convenience, we provide a summary below:
* #### Introduction to the conventional neuro-symbolic framework with NNs and why these methodologies are infeasible in CL scenarios, leading to the clear necessity of using our pure NN-based CLeAR approach (with **Literature reviews 1 in global comment**).
* #### Enhanced comparison with baseline performance of conventional CL methodologies.
* #### Development of a novel assessment method to achieve a more comprehensive evaluation of the performance improvement achieved by CLeAR, along with in-depth analysis using this method (details in the attached **PDF & iPYe Q1**).
* #### The weaknesses of the current transformer model in handling AR tasks and the potential for CL-AR models to be utilized in training reasoning capabilities of future LLMs.
We hope that the provided new experiments and the additional discussion have convinced you of the merits of this paper. Please do not hesitate to contact us if there are additional questions.
Meanwhile, we thank you for your very helpful comments. It would indeed make our paper clearer and stronger.
Thank you for your time and effort.
Best regards, Authors
---
Rebuttal 2:
Title: A kind reminder
Comment: ### Dear reviewer GYfA
We wanted to kindly remind you that the interactive discussion phase will be ending in just a few hours. Unfortunately, we won't be able to engage in further discussions with you after the deadline. We hope that our response has addressed your concerns, and turned your assessment to a more positive side. Please let us know if there are any other things that we need to clarify.
We thank you so much for your helpful and insightful suggestion.
Best, Authors | Summary: The authors propose a training framework for continual learning in which task inputs are mapped to regions of or distributions over an embedding space, before the resulting embedding is passed to a single model that learns continually to solve the tasks. They apply this continual learning framework to tasks based on learning algorithms, testing against input-output pairs from outside the training distribution. The resulting training and task framework can solve a number of algorithmic reasoning tasks, up to the limitations imposed by model architectures.
Strengths: The paper is one of the first I've seen to tackle algorithmic reasoning/algorithm induction tasks along with continual learning. The core idea of mapping tasks and inputs to an embedding space from which a unified model learns continually to solve tasks (via a task-specific output head) is, to my knowledge, highly novel. The paper is duly careful about which levels of the Chomsky hierarchy it considers as algorithms for purposes of defining the algorithmic reasoning task.
Weaknesses: The paper needs to more clearly present its comparisons between a baseline and a contribution. I understand that the authors do not claim every task on which they evaluated to be "solved", especially since many of them require positional encodings and a context-free or context-sensitive memory. However, since the message is more complicated, the authors could do a better job presenting exactly where their training framework and meta-model architecture makes a difference against, for instance, just training the baseline neural network architectures with no continual learning, or training them and then retraining with a baseline level of catastrophic forgetting.
The authors have since conducted a number of additional experiments to address this weakness.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: Where do the authors think some form of hierarchical representation or compression of previous "knowledge" is happening to enable continual learning?
The authors have submitted supplementary material on exactly this question.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 4 excellent
Limitations: The authors duly discuss the chief limitation of their approach being a discretization in the embedding space.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: *We thank you for the positive comments and suggestions. We have addressed each of your questions below.*
***
## Weakness
**W1** Thank you for the insightful comments. We apologize for the insufficient explanation of the experimental results. As you suggested, we will address these points in the final version of the paper.
**Explanation**
In single-task training (Table 1, main), accuracy exceeding 90% was taken to indicate that the model had learned the algorithm. In such cases, the model possesses accurate algorithmic information, making it easier to transfer this knowledge to subsequent tasks. However, many models and tasks exhibit low accuracy in single-task training. For CLeAR to be an effective CL methodology, we believe it is crucial to preserve even partial information (ACC < 90%) when the model cannot learn the algorithm perfectly, and to transfer it successfully. Therefore, we conducted experiments even on challenging tasks with low single-task performance. The performance of CLeAR is evaluated through the CL Initial, CL Final, and BWT values. The average single-task result for each task and the joint-training performance, which is generally considered an upper bound in typical CL scenarios, provide insight into task difficulty rather than into CLeAR's performance itself.
Across our experiments, CLeAR demonstrated the ability to preserve even incomplete information (CL Initial < 90%) on challenging tasks high in the Chomsky hierarchy. First, for difficult tasks, the CL Final score did not decrease significantly from the CL Initial score, and in many cases performance even increased through BWT. Second, CLeAR outperformed the baselines of existing CL methodologies (supple. D).
From these observations, we can conclude that our CLeAR methodology is valid not only for learning individual tasks but also for difficult tasks with a high Chomsky hierarchy.
***
## Questions:
**Q1** We sincerely appreciate your crucial question. Thanks to your comment, we devise new analytical methods to address a crucial yet challenging aspect of CL: how the model preserves and integrates information from previous tasks while incorporating new knowledge.
In addition, your comment (W1) led us to interesting results. Unlike typical CL scenarios where joint training is an upper bound, joint training often scored lower than CL in our experiments. Analyzing the model's internal learning process, we found that while CLeAR acts as a regularizer that integrates information between tasks, joint training in CL-AR fails to find common information.
These new analyses advance the paper a step further and highlight the significance of CL in AR tasks, similar to human learning logic in real-world situations.
**Experiments (Fig.R1 and Fig.R2 in PDF)**
We conducted experiments with two simple regular tasks, Parity-check (PC: whether the sum of input is even or odd) and Even-pairs (EP: whether the count of two consecutive digit pairs, 01 and 10, is the same), both with binary inputs, using a fixed seed and an RNN model. In both CL and joint training, 100% IID and OOD accuracy was achieved.
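The two regular tasks can be stated precisely in a few lines; the label encoding (1 = odd sum / equal pair counts) is an assumption, while the task definitions follow the text:

```python
def parity_check(seq):
    """PC: is the sum of a binary sequence even (0) or odd (1)?"""
    return sum(seq) % 2

def even_pairs(seq):
    """EP: are the counts of consecutive 01 and 10 pairs equal?
    For binary sequences this is equivalent to the first and last
    symbols matching."""
    n01 = sum(1 for a, b in zip(seq, seq[1:]) if (a, b) == (0, 1))
    n10 = sum(1 for a, b in zip(seq, seq[1:]) if (a, b) == (1, 0))
    return int(n01 == n10)

print(parity_check([1, 0, 1, 1]))  # 1 (sum 3, odd)
print(even_pairs([0, 1, 1, 0]))    # 1 (one 01, one 10)
print(even_pairs([0, 1, 1, 1]))    # 0 (one 01, zero 10)
```

Both tasks are recognizable by a two-state finite automaton, which is what makes them convenient probes for the feature-space visualizations discussed next.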
From the perspective of formal language theory, learning each task requires acquiring the automata structure (Fig.R1 D) internally. Fig.R1 A, B, C, E represent the model's hidden feature space colored corresponding to PC (up) and EP (down) automata states for the input sequence. Fig.R1 A shows that after learning PC, the model has fully acquired the corresponding automata (up), while for the unlearned EP, the points remain undivided (down). Fig.R1 B demonstrates that after learning EP via CLeAR, the model maintains information from PC (upper) while learning EP (lower), with minimal change in feature space. The first task acts as a regularizer, preventing inefficiency and maintaining a minimal joint automata state for both tasks.
Fig.R1 C illustrates joint training, where the model's feature space is separated by the corresponding automaton state (up), leading to less effective use of the feature space. In contrast to CL, joint training failed to learn the commonality between tasks. This characteristic is even more distinct in Fig.R1 E, which concerns a feature that is easy for the model to learn but does not contribute to problem-solving, and therefore need not be learned. Fig.R1 E shows that while joint training forms such unnecessary clusters, CL avoids learning redundant features. These observations suggest that CLeAR acts as a regularizer in CL-AR, reducing ineffective use of the feature space and integrating information between tasks by combining corresponding automata states.
For more complex context-free (Stack manipulation: SM) and context-sensitive (Bucket sort: BS) tasks, Stack-RNN was used (Fig.R2). In the case of SM, for learning to occur, the model must necessarily internalize the use of a memory stack. CL exhibited superior results compared to joint training for both SM and BS on the IID and OOD test sets. To verify whether the model indeed learned the true algorithm in each case, Fig.R2 B shows the memory-stack actions for each input symbol. In CL (left), it is evident that the model has accurately learned stack usage; furthermore, during the subsequent learning of BS, it retains this knowledge and even moves closer to the appropriate actions, resulting in backward transfer. Fig.R2 C displays stack changes over the sequence in both CL and joint training, revealing more appropriate learning in CL, whereas joint training learned a shortcut that works for IID data rather than the actual algorithm.
Through these experiments, CL-AR was shown to integrate task-related information by learning automata internally and positioning states with similar roles close together in the feature space. Additionally, CL can be more knowledge-preserving and more robust against shortcut learning than joint training. This work showcases the strength of the CL methodology in learning logic and proposes a novel approach for analyzing internal learning processes from a CL perspective.
---
Rebuttal Comment 1.1:
Title: Looking forward to your post-rebuttal feedback
Comment: Dear Reviewer iPYe
Thank you again for the insightful comments and suggestions. Since the deadline for our discussion is approaching, we sincerely look forward to your follow-up response. We are happy to provide any additional clarification that you may need.
For your convenience, we provide a summary below:
* #### Strengthened experimental explanation and showed evidence for the strong performance of CLeAR across tasks of various difficulties.
* #### Enhanced comparison with baseline performance without using CL and existing CL methodologies.
* #### Newly devised a novel approach based on CL-AR to analyze what kind of hierarchical representation or compression of previous "knowledge" is happening to enable CL (with **attached PDF**)
We hope that the provided new experiments and the additional discussion have convinced you of the merits of this paper. Please do not hesitate to contact us if there are additional questions.
Meanwhile, we thank you for your very helpful comments; they will indeed make our paper clearer and stronger.
Thank you for your time and effort.
Best regards, Authors
---
Rebuttal 2:
Title: A kind reminder
Comment: ### Dear reviewer iPYe
We wanted to kindly remind you that the interactive discussion phase will end in just a few hours. Unfortunately, we will not be able to engage in further discussion with you after the deadline. We hope that our response has addressed your concerns and turned your assessment in a more positive direction. Please let us know if there is anything else we need to clarify.
We thank you very much for your helpful and insightful suggestions.
Best, Authors
---
Rebuttal Comment 2.1:
Comment: The authors have provided a satisfactory response to my concerns, and I am intrigued to explore their additional experiments.
---
Reply to Comment 2.1.1:
Title: Thank you for your positive reply
Comment: We sincerely appreciate your reviews. We're truly pleased that our responses satisfactorily addressed your concerns.
Thank you very much for raising the score and viewing our additional experiments favorably.
Thanks to your insightful comments, we were able to strengthen the paper significantly.
Best, Authors | Summary: In this paper, the authors introduce CLeAR, a new method of Continual Learning (CL) for Algorithmic Reasoning (AR). In doing so, several relaxations to standard CL are discussed: (1) scenarios where the same data are used for different tasks, (2) the variation of the input, being not of fixed size, and (3) analysis of performances on OOD data. The main algorithm proposed, namely CLeAR, combines a map to a shared feature space with a RCNN or a LSTM where learning happens, and uses an additional MLP to project in the output space. The CL strategy for catastrophic forgetting builds on LwF.
To evaluate the proposed model, the authors propose 15 benchmarks of increasing difficulty, based on the use of different Chomsky hierarchies. The experiments are conducted for the method and several baselines, some of which include known CL strategies. Overall the results show that CLeAR addresses the CL in AR, providing also positive Backward transfer.
Strengths: The authors address for the first time the CL adaptation of AR. The setup proposed differentiates from standard CL, where input data dimensions are fixed, and evaluation is typically done in an IID test-set w.r.t. to the joint task distribution. This leads to a more general setting w.r.t. standard CL which can improve current research and lead to new methodologies.
The proposed method well adapts to variegated AR tasks, being designed to use a shared feature space where learning takes place. The experiments show that this choice has the desired effects, addressing catastrophic forgetting in several scenarios. In addition, the newly designed benchmarks (to be released on github) offer space for further research in this new field.
The presentation is overall clear, with much focus on benchmarking the method in several experiments.
Weaknesses: It seems some information is missing in the current text, and the experimental evaluation can be improved. Referring to Table 3, the discussion in Section 4.2 only mentions the first 4 rows, but nothing is mentioned about the last three. How should those results be interpreted and what do they convey? It seems that the method is not solving the task, accordingly to the authors' claim that the threshold should be above 90% of the task's accuracy.
As a suggestion, the comparison with existing CL strategies is very important and should appear or at least be discussed in the main text.
All experiments are conducted in a single run, contrary to common practice, especially in CL. This may lead to a confirmation bias over the proposed method and should be clarified at least here why the choice of single runs does convey all relevant information. Based on the experiments in Appendix D, I'm not entirely sure that LwF combined with the shared feature mapping would drastically differentiate from CLeAR (Tables 4 and 5), since it is showing very similar classification scores. To this end, I sense that adopting LwF is particularly suited for the task the authors are proposing since it is likely adapting previous tasks to new observed sequences.
**Typo** In the bibliography refs [33] and [34] coincide.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Can you provide more details about why LwF is performing very well in CL-AR? Is it because of the pseudo-labels in the initial training phase?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: **Major** The related work connected to AR is poorly discussed and only one recent work is mentioned in the related work. This gives somehow the feeling that research in AR is niche. To this end, more works should be cited or, otherwise, the motivation for CL in AR should be strengthened.
**Minor** Details on the benchmarks are not entirely clear, making a bit fuzzy the decision behind their design. Such explanation can be improved even in the Supplementary to make understand how different tasks combine in increasing level of difficulty
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: *We thank you for the insightful comments and suggestions. We have addressed each of your questions below.*
***
## Weaknesses
**W1** Thank you for your sharp comments, and we apologize for the insufficient explanations in certain parts. As you pointed out, we will make sure to incorporate the mentioned content into the final version of the paper.
The accuracy exceeding 90% in the single training (Table 1) indicates that the model has learned the algorithm perfectly. In this case, the model possesses accurate information about the algorithm, making it easier to transfer this knowledge to the next task. However, to create an excellent CL methodology, we believe it is important to preserve and transfer even partial information (CL Initial under 90%) about the task, even if it is not complete. Therefore, rows 5, 6, and 7 in Table 3 serve as positive indicators that the model can preserve even incomplete information for challenging tasks with a high Chomsky hierarchy. Firstly, the CL final score for difficult tasks did not significantly decrease compared to the CL Initial score, and we observed numerous cases where performance on past tasks actually increased through BWT. Secondly, our model demonstrated higher performance compared to the baseline of existing CL methodologies.
From these points, we can conclude that our model remains valid even for tasks with a high hierarchy where there are limitations in learning just a single task.
Furthermore, to enhance our experimental evaluation, we have paid attention to the phenomenon related to joint training shown in the results table. This inspection of our results appears in **Experiment 2** of the global comment and in **Fig.R1** and **R2** of the **PDF**.
***
**W2** We agree with the comment. In the final version, we will include the accuracy of conventional CL that was compared in the supplementary section in the main paper.
***
**W3** Following your comment, we conducted three repeated experiments for all settings by adding two additional seeds.
|% std|RNN|Stack-RNN|Tape-RNN|LSTM|
|---|---|---|---|---|
|Joint|1.14|5.01|6.77|1.76|
|Initial|1.55|1.41|3.91|1.41|
|Final|1.40|2.25|5.01|2.25|
* Standard deviation (%) from three separate runs of the high-correlation CL-AR scenario (Table 3 in the main paper)
With the exception of Tape-RNN, which is largely contingent on initialization, minimal variance was observed, confirming the consistency of the results in this paper. In the final-version table, we promise to include the results and variances from the repeated experiments.
***
## Questions
Q1. CLeAR is a model specialized for AR tasks, serving a distinct purpose from LwF. Unlike images, whose input distributions share common attributes such as color, texture, and patterns, AR tasks present highly distinct input distributions unique to each task. Given the considerable dissimilarity between tasks and their varying dimensions, we introduce CLeAR, a novel approach designed to adapt effectively to these unique scenarios.
The mapping function, the first part of CLeAR, is designed to explicitly align the inputs of different tasks. We propose a novel approach that creates mappings able to align distributions without prior knowledge of each task by adopting an Auto-Encoder architecture. This resulted in a highly aligned distribution across all tasks (Fig.4-b in the main paper).
In contrast, LwF assumes that image distributions are similar across tasks. While this imperfect assumption leads to degraded performance in the image domain, LwF fortunately benefits from our mapping function. However, because the regularization term in LwF accounts for image-distribution shift, LwF does not fully exploit the highly aligned mapping space, resulting in notably lower performance on the ten complex tasks (Supp. Table 5). Our CLeAR, on the other hand, consistently exhibits high performance on AR tasks thanks to its AR-specialized label-sharpening term.
Table 5 in Appendix D shows that CLeAR is better at preserving information over long sequential CL, especially for the RNN. Additionally, we demonstrated that our CLeAR methodology surpasses not only LwF but also LwF with our mapping:
|High-correlation|CLeAR final (%)|CLeAR forgetting (%)|LwF (our mapping) final (%)|LwF (our mapping) forgetting (%)|
|---|---|---|---|---|
|MA|60.09|3.07|55.37|43.64|
|CN|87.23|0.46|82.95|11.20|
|RS|46.29|1.52|50.30|2.72|
|BS|79.17|2.73|75.31|4.41|
* MA: modulus from 2 to 8
* CN: cycle length from 2 to 8
* RS: string alphabet count from 2 to 8
* BS: string alphabet count from 2 to 8
|Inter-hierarchy|CLeAR final (%)|CLeAR forgetting (%)|LwF (our mapping) final (%)|LwF (our mapping) forgetting (%)|
|---|---|---|---|---|
|Ascending|60.22|7.55|59.42|12.07|
|Descending|63.32|2.90|63.05|8.49|
* 10 tasks (EP-CN-PC-CO-RS-SM-IP-OF-BS-DS) in ascending and descending order of the hierarchy
These experiments show that CLeAR is capable of handling long and challenging scenarios and outperforms LwF (+ our mapping).
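The contrast drawn above between LwF's soft-label distillation and an AR-specialized "label sharpening" term can be sketched roughly as follows; the sharpening rule shown (collapsing the old model's output to a hard pseudo-label, on the grounds that AR targets are discrete and exact) is our hypothetical illustration, not the paper's exact formulation:

```python
import numpy as np

def softmax(z, T=1.0):
    """Temperature-scaled softmax."""
    z = np.asarray(z, dtype=float) / T
    e = np.exp(z - z.max())
    return e / e.sum()

def lwf_distill_loss(new_logits, old_logits, T=2.0):
    """LwF-style term: cross-entropy of the new model's soft outputs
    against the frozen old model's soft outputs (soft pseudo-labels)."""
    p_old = softmax(old_logits, T)
    p_new = softmax(new_logits, T)
    return float(-np.sum(p_old * np.log(p_new + 1e-12)))

def sharpened_pseudo_label(old_logits):
    """Hypothetical 'label sharpening' for AR tasks: collapse the old
    model's output to a hard one-hot pseudo-label instead of matching
    the full soft distribution."""
    y = np.zeros_like(old_logits, dtype=float)
    y[np.argmax(old_logits)] = 1.0
    return y

old = [2.0, 0.5, -1.0]
print(sharpened_pseudo_label(old))           # [1. 0. 0.]
print(lwf_distill_loss(old, old))            # residual entropy of the soft target
```

The soft-distillation term keeps a non-zero loss even at the old optimum (the entropy of the soft target), whereas a hard pseudo-label gives an exact target, which matches the discrete nature of AR outputs.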
***
## Limitations
**L1** Thank you for the valuable feedback. We will cite two categories of AR-related papers and add the relevant content to the related-work section: papers such as DeepProbLog (2018), HOUDINI (2018), and NeuralTerpret (2017), which tackle AR tasks using logic solvers, as well as "Recognizing Long Grammatical Sequences Using Recurrent Networks Augmented With an External Differentiable Stack" (PMLR 2021) and "Learning Hierarchical Structures with Differentiable Nondeterministic Stacks" (ICLR 2022), which attempt AR solutions using neural networks. Additionally, we will add ref [24] of the main paper to the related-work section.
***
**L2** We apologize for the inadequate explanation of the details of the benchmarks. In the final version, we will strengthen the explanation in the supplementary section and promise to publicly provide a comprehensive disclosure of the complete code we used and the precise experimental setups for each task.
***
*p.s. Due to space constraints, we'll promptly provide the full table for W3, Q1 during the interactive review on request*
---
Rebuttal Comment 1.1:
Title: Looking forward to your post-rebuttal feedback
Comment: Dear Reviewer iTug
Thank you again for the insightful comments and suggestions. Since the deadline for our discussion is approaching, we sincerely look forward to your follow-up response. We are happy to provide any additional clarification that you may need.
For your convenience, we provide a summary below:
* #### Strengthened experimental explanation and showed evidence for the strong performance of CLeAR across tasks of various difficulties (more details in **iPYe W1**).
* #### Repeated multiple experiments with different seeds show the stability of results and we also reported standard deviation.
* #### Motivation behind our CLeAR algorithm in the context of CL-AR, as well as the significant differences against LwF. This includes various additional comparative experiments.
* #### Numerous related works concerning AR tasks have been added to the paper.
We hope that the provided new experiments and the additional discussion have convinced you of the merits of this paper. Please do not hesitate to contact us if there are additional questions.
Meanwhile, we thank you for your very helpful comments; they will indeed make our paper clearer and stronger.
Thank you for your time and effort.
Best regards, Authors
---
Rebuttal 2:
Title: A kind reminder
Comment: ### Dear reviewer iTug
We wanted to kindly remind you that the interactive discussion phase will end in just a few hours. Unfortunately, we will not be able to engage in further discussion with you after the deadline. We hope that our response has addressed your concerns and turned your assessment in a more positive direction. Please let us know if there is anything else we need to clarify.
We thank you very much for your helpful and insightful suggestions.
Best, Authors | Rebuttal 1:
Rebuttal: We thank all the reviewers very much for their valuable comments and constructive suggestions to strengthen our work, and also for the positive comments and encouraging remarks: The paper addresses for the first time the continual learning (CL) adaptation of algorithmic reasoning (AR) tasks (iTug, iPYe). It also introduces a novel set of logical reasoning tasks for CL which are categorized according to the Chomsky hierarchy (FdFh, aCtv). These are disparate from the standard CL scenario (iTug) and research on them has not yet been conducted (GYfA). Also, it is challenging to use most previous CL methodologies on this new task (FdFh). The paper introduces a novel methodology, CLeAR (GYfA), and the core idea of mapping tasks and inputs to an embedding space from which a unified model learns continually to solve tasks is, "to my knowledge, highly novel" (iPYe), novel (FdFh), and "makes sense" (aCtv). In several experiments, the proposed method showed the desired effects, preventing catastrophic forgetting in several scenarios (iTug) and showing the capability of achieving backward transfer (FdFh). The paper bridges the gap between CL and real-world cognitive-skill development (GYfA) and leads to a more general setting w.r.t. standard CL which can improve current research and lead to new methodologies (iTug).
Following reviewers’ suggestions, we have added literature reviews and performed more supporting experiments. Here, we would like to highlight the **main revisions**:
# Literature reviews
We cited and discussed the following papers
> **DeepProbLog (2018) [1], HOUDINI (2018) [2], NeuralTerpret (2017) [3]**: These papers propose neuro-symbolic or probabilistic frameworks for solving logical problems. The frameworks are divided into two main parts. The first consists of task-specific, pre-defined parameterized functions capable of handling logic (e.g., ProbLog for DeepProbLog, a symbolic program synthesizer for HOUDINI, and a program interpreter for NeuralTerpret). The second involves neural networks (NNs). These frameworks support end-to-end gradient-based updates for the combination of the parameterized functions and the NN: the program part acts as a logic solver, while the NN acts as a feature extractor from the image. We have included these papers in the "related work" section and described their relevance and differences compared to our research (by FdFh, GYfA).
> **On warm-starting neural network training [4], Does the Adam Optimizer Exacerbate Catastrophic Forgetting? [5]**: These papers discuss catastrophic forgetting, one of the core challenges in CL. They address the issue of rapid forgetting by preventing abrupt changes in model parameters. We have added these papers to the "introduction" and revised the relevant content (by aCtv).
> **A formal hierarchy of RNN architectures [6], Saturated transformers are constant-depth threshold circuits [7]**: These papers discuss the feasibility of algorithmic reasoning (AR) tasks on RNN and Transformer model architectures. We have added citations to these papers in the "related work" section (by aCtv).
# Experiments
> **Repeated experiments**: We updated single-run experiments to three repeated experiments by adding two more seeds. We confirmed that there were only minor variations in the values of the experimental results. In the final version, we will conduct many more repeated experiments, and this measure will further reduce confirmation bias (by iTug. refer **iTug W3** for more detail).
> **In-depth inspection of the model learning reasoning**: “What is learned in the model across disparate tasks, allowing knowledge transfer.”(FdFh), “How does the model coordinate incoming tasks?”(iPYe), “Is it really good to learn AR sequentially instead of learning all at once?”(aCtv) These are very important questions in CL, but it has been a challenging problem due to the black-box nature of the NN. However, we have taken a step forward in solving these problems with the help of the high interpretability of the AR task and automata theory. We conducted an analysis of **A. what the model actually learns from data and how they coordinate it with the new task** by analyzing the model’s hidden feature with the corresponding automata. And **B. what differences are achieved between CL and joint training.** In typical CL scenarios, joint training (all tasks learned together), is often considered an upper bound in terms of performance. However, contrary to the conventional belief, joint training often showed lower performance in the conducted AR tasks, which emphasizes the importance of continually learning AR tasks just like humans improve their logical abilities through step-by-step learning in the real world. We quantitatively compared the performance of joint training and CL and proposed a convincing hypothesis for why CL outperforms joint training in certain cases (by FdFh, iPYe. For the result of this experiment, refer **Fig R1,R2 of attached PDF** and **iPYe Q1** for more detail).
> **Experiments on more difficult tasks**: We extended our experiments to more model structures on the challenging ten sequential AR task scenarios from our paper, along with reversed order of the Chomsky hierarchy. Furthermore, we conducted additional experiments to investigate cases of tasks with partially overlapping columns (by FdFh. refer **FdFh W3,Q1** for more detail).
* The code for additional experiments will also be publicly released in the final version.
# References
[1] Manhaeve, et al. "Deepproblog: Neural probabilistic logic programming."
[2] Valkov, et al. "Houdini: Lifelong learning as program synthesis."
[3] Gaunt, et al. "Differentiable programs with neural libraries."
[4] Asht, et al. "On warm-starting neural network training."
[5] Ashley, et al. "Does the Adam Optimizer Exacerbate Catastrophic Forgetting?."
[6] Merrill, et al. "A formal hierarchy of RNN architectures."
[7] Merrill, et al. "Saturated transformers are constant-depth threshold circuits."
Pdf: /pdf/28c143d112bced2170d9156c717d9b77a01e7940.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: The paper looks at the continual learning (CL) setting involving tasks which require more abstract reasoning (reusable across input domains), s.a. addition and multiplication. The authors imagine that the inputs of all encountered tasks can be mapped into a common space, which, in turn, can be similarly transformed across tasks, in order to achieve transfer on a higher level. To take advantage of this fact, the inputs are processed by three NNs: 1) task-specific m_t which maps the inputs to a common space; 2) f which processes the resulting latent embeddings; 3) h_t which is a task-specific single-layer projection head. m_t is trained using an additional decoder. f and h_t are trained using a similar approach to Learning without Forgetting (LwF). The paper introduces a new set of tasks for evaluating abstract reasoning in a CL setting. The results demonstrate that this approach is capable of backward transfer on the created tasks.
Strengths: The paper looks towards an interesting direction in CL. I found the discussion on the challenges around using most CL methods for abstract reasoning (AR) to be interesting.
The paper introduces a novel set of logical reasoning tasks which are categorised according to a class in the Chomsky hierarchy.
The experiments demonstrate the method’s capability of achieving backward transfer.
The use of the auto-encoder objective to learn a common mapping between tasks appears novel.
Weaknesses: The method claims to introduce the first CL scenario for abstract reasoning, but simpler AR tasks have been explored in NeuralTerpret, HOUDINI, DeepProbLog.
The description of the algorithm seems rushed. The approach is presented but not well motivated, e.g. the properties of the mapping m_t on lines 218-222. The continual learning approach’s novelty seems limited, being mainly modelled after LwF.
My impression is that the method is not evaluated on a difficult enough setting in order to determine whether it is promising. Each input consists of one-hot encodings (as opposed to images) which I imagine are easier to learn how to map to a common space.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: What do you think would happen if two tasks have partially overlapping input alphabets, e.g. (task1: 1 2 3 4 , task2: 3 4 5)? Would this idea of mapping both input distributions to the same space be problematic?
What would you say is learned in f_theta across disparate tasks, which allows for transfer between them?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: The fixed-size model limits the effective number of tasks which can be solved.
It appears that the approach relies on some level of similarity between tasks, as it uses the new inputs to create pseudo-labels.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: *We thank you for the insightful comments and suggestions. We have addressed each of your questions below.*
## Weaknesses
**W1**
Thank you for introducing us to these inspiring works. We will add this research to the related-work section (we report a summary in the global comment). While all of these studies are remarkable, we think there are significant differences between their work and ours in terms of both scenarios and methods.
First, our work does not rely on any human (prior) knowledge about the tasks in its model construction. The model purely performs "Learning from data" without knowing incoming logical tasks. In contrast, their works require pre-configured programs that handle logic with human knowledge of future tasks. This contradicts the assumption of CL, as it is impossible to anticipate and program for all potential problems that may arise.
Second, our paper performs logical operations using a pure neural network. While their works do utilize neural network structures internally, they mainly use them as feature extractors, e.g., extracting features from MNIST digits. Also, in the lifelong-learning scenarios they present (e.g., summing two images to perform arithmetic operations on multiple images), the neural network acts as a feature extractor, while pre-defined programs perform the actual logical operations.
Third, while our methodology allows CL, their works lack scalability to multiple tasks. They either modularize programs for each task or create parameterized programs from the beginning to accommodate multiple tasks.
* * *
**W2**
We apologize for the unclear explanation of the algorithm. In the final version, we will revise the description of our CLeAR methodology, motivating it by the unique characteristics of AR tasks.
CLeAR is a model specialized for AR tasks, serving a distinct purpose from LwF. Unlike images, whose input distributions share common attributes such as color, texture, and patterns, AR tasks present highly distinct input distributions unique to each task. Given the considerable dissimilarity between tasks and their varying dimensions, we introduce CLeAR, a novel approach designed to adapt effectively to these environments.
The mapping function, the first part of CLeAR, is designed to explicitly align the inputs of different tasks. We propose a novel approach that can align distributions without prior knowledge of each task by adopting an Auto-Encoder architecture.
In contrast, LwF assumes that image distributions are similar across tasks. While this imperfect assumption leads to degraded performance in the image domain, LwF fortunately benefits from our mapping function. However, because the regularization term in LwF accounts for image-distribution shift, LwF does not fully exploit the aligned mapping, resulting in lower performance on the ten complex tasks (W3). Our CLeAR, on the other hand, consistently exhibits high performance on AR tasks thanks to its AR-specialized label-sharpening term.
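The lossless, task-specific mapping described above can be sketched with a simple linear auto-encoder trained by gradient descent on one-hot inputs; this is an illustrative stand-in, not the paper's exact architecture, and the additional objective CLeAR uses to align latent distributions across tasks is omitted here:

```python
import numpy as np

# Sketch of a task-specific mapping m_t trained as an auto-encoder:
# one-hot inputs from a task-specific alphabet (size 3) are embedded into a
# shared 4-dimensional latent space and must be exactly reconstructable.
rng = np.random.default_rng(0)
X = np.eye(3)                          # the task's one-hot input symbols
W_enc = rng.normal(size=(3, 4)) * 0.5  # m_t: task input -> shared space
W_dec = rng.normal(size=(4, 3)) * 0.5  # decoder used only to train m_t

lr = 0.05
for _ in range(10_000):
    Z = X @ W_enc                      # latent codes in the shared space
    E = Z @ W_dec - X                  # reconstruction error
    W_dec -= lr * Z.T @ E
    W_enc -= lr * X.T @ (E @ W_dec.T)

mse = float(np.mean((X @ W_enc @ W_dec - X) ** 2))
print(f"reconstruction MSE: {mse:.2e}")  # near zero: the mapping is lossless
```

Because reconstruction is (near-)perfect, no task information is destroyed by m_t; what remains task-specific is only the mapping itself, while the shared model f sees a common latent space.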
* * *
**W3** We conducted additional experiments on a more complex sequential scenario involving 10 different AR tasks (EP-CN-PC-CO-RS-SM-IP-OF-BS-DS; averages reported).
1. Increasing hierarchy
> CLeAR final 60.22%, BWT -7.55%
LwF (our mapping) final 59.42%, BWT -12.07%
2. Decreasing hierarchy
> CLeAR final 63.32%, BWT -2.90%
LwF (our mapping) final 63.05%, BWT -8.49%
Our model demonstrated favorable performance and less forgetting in these difficult scenarios.
According to recent research (main [11]), even for tasks that seem very simple to humans, such as the parity-check task (whether the sum of a binary string is even or odd), transformer models completely fail to learn the algorithm. Therefore, we find it difficult to consider our tasks easy from the model's perspective.
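The parity-check task mentioned above is trivial to state programmatically, yet a model only truly solves it if it generalizes to OOD lengths; a minimal sketch (the labeling convention is assumed, not taken from the benchmark):

```python
def parity(bits):
    """Parity-check (PC): 1 if the input contains an odd number of 1s, else 0.
    (The benchmark's exact labeling convention may differ.)"""
    return sum(bits) % 2

def parity_recurrent(bits):
    """The two-state automaton an RNN must internalize: an XOR fold
    over the input, one bit of state carried across the sequence."""
    state = 0
    for b in bits:
        state ^= b
    return state

seq = [1, 0, 1, 1, 0, 1, 1, 1, 0, 0, 1] * 5  # OOD: longer than training lengths
assert parity(seq) == parity_recurrent(seq)
```

A transformer that memorizes length-bounded patterns can match `parity` on training lengths while failing on `seq`, whereas a model that has internalized the single-bit automaton generalizes to any length.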
## Questions
**Q1** Thank you for suggesting good experiments. We conducted experiments on tasks that have semantically identical partially overlapping input alphabets.
An MA (modular arithmetic) task with fixed modulus 8; for each sub-task, the input digits are (0,1,2,3), (2,3,4), (3,4,5), (4,5,6,7), (1,3,5,7), and (2,4,6,8), sequentially (averages reported).
> CLeAR final 66.17%, BWT 0.62%
LwF (our mapping) final 60.98%, BWT -3.68%
A BS (bucket sort) task, with the same set of numbers sequentially included in the input.
> CLeAR final 45.03%, BWT -1.85%
LwF (our mapping) final 45.95%, BWT -0.55%
CLeAR demonstrates favorable performance on MA with partial overlap, achieving a best accuracy of 99.81% with the LSTM; for BS with partial overlap, however, there is plenty of room for improvement.
***
**Q2** To address your comment, we performed additional experiments. We observed and analyzed, for the first time, how a model learns certain AR tasks internally during a CL process and coordinates knowledge. Additionally, through this rebuttal, we made a notable observation that CL on AR can often outperform joint training. Based on these, we proposed hypotheses to explain this phenomenon. Please refer to Fig.R1 and R2 in the attached PDF file and iPYe Q1 for details.
## Limitations
**L1** Your observation is valid; indeed, increasing the model's capacity is a viable approach in the field of CL, with foundational models such as Progressive Neural Networks (2016) and Dynamically Expandable Networks (ICLR 2018). However, recent CL research has primarily been oriented toward using a fixed-size neural network, which is more challenging and more closely resembles human learning processes.
***
**L2** We agree with your insight. In fact, CL methodologies aim to identify similarities between tasks and leverage them to facilitate the learning of subsequent tasks. Traditional CL on images shows high task similarity because all tasks involve recognizing general features such as colors or stripes, which require similar kernels.
However, in CL-AR, the distributions of these tasks vary significantly, making it challenging for the model to capture similarity, as mentioned in W2.
*Due to text limit, we'll promptly provide the full table in W3, Q1 during the interactive review on request.*
---
Rebuttal Comment 1.1:
Title: Looking forward to your post-rebuttal feedback
Comment: Dear Reviewer FdFh
Thank you again for the insightful comments and suggestions. Since the deadline for our discussion is approaching, we sincerely look forward to your follow-up response. We are happy to provide any additional clarification that you may need.
For your convenience, we provide a summary below:
* #### Fundamental differences between the previous neuro-symbolic framework with NNs and our CLeAR, and critical reasons why only our approach is suited for CL.
* #### Detailed motivation behind our CLeAR algorithm, highlighting the key distinctions from LwF, with additional comparative experiments (more details in **iTug Q1**)
* #### Difficulty of AR tasks for NNs (even large transformers fail to learn the simple (*to humans*) PC task).
* #### Additional experiments on the lengthy and challenging **10-consecutive-AR-task** scenario, showing CLeAR exhibits minimal forgetting.
* #### Additional experiments on **tasks whose input alphabets partially overlap**, showing CLeAR can remember almost all tasks perfectly (99.8%).
* #### Newly devised a novel approach based on CL-AR to analyze how the model learns and transfers tasks in CL (details in the attached **PDF** & **iPYe Q1**)
* #### Discussions of the current trends in CL research where model size is either fixed or increased and methods utilizing the similarity between tasks for knowledge transfer.
We hope that the provided new experiments and the additional discussion have convinced you of the merits of this paper. Please do not hesitate to contact us if there are additional questions.
Meanwhile, we thank you for your very helpful comments; they will indeed make our paper clearer and stronger.
Thank you for your time and effort.
Best regards, Authors
---
Rebuttal Comment 1.2:
Comment: Thank you for your reply.
W2: Your reply does not alleviate my concern regarding the novelty of the approach. As f and h_t are trained using an objective similar to LwF, the novelty in the approach appears to be in: a) having a mapping m_t which maps to a common space, as well as in b) how m_t is trained. The idea of a) is already found in papers doing Domain Adaptation, leaving me to conclude that the novelty of the approach hinges on b). Could you explain what makes the way you train m_t novel? If this is the main contribution, shouldn't it be described more clearly in the paper (instead of being left to the Appendix)?
W3: Thank you for providing the additional results. Could you clarify what these 10 AR tasks are? What are their inputs, what are the labels? My original concern was that it was perhaps not really challenging for the inputs used originally to be mapped to the same common space.
Q1: Thank you for providing these extra results. Conceptually, the requirement that all mappings need to span all of the common space appears limiting in experiments with partial overlap. Moreover, what if the 2 tasks are too distinct and there's no overlap? Mapping their inputs to the same common space and processing these with the same f might then reduce performance. In contrast, CL methods used for image processing of different domains do not have a penalty which forces the latent spaces between different domains to overlap. Would you agree that there's an inherent limitation as to the variety of input domain functions which can be used?
---
Reply to Comment 1.2.1:
Title: Response to W2
Comment: Thank you very much for your kind reply. We have provided answers to your three questions sequentially.
## Main answer W2
* **Regarding a)**, we believe that the alignment methods in domain adaptation differ from our mapping in both purpose and method. The concept of domain-variant, or task-irrelevant, information does not apply to the CL-AR task. As the domain of images transitions from natural to cartoon, domain-invariant (task-relevant) information remains preserved, and alignment compensates for the altered domain-variant features.
* In contrast, in the AR task, all inputs are task-relevant, and even a minor change in a single input pixel leads to a change in the output class. In essence, our mapping function represents a unique **one-to-many transformation**, which ensures a **uniform distribution** and a **function range equal to the codomain**, rather than preserving a feature distribution. Furthermore, the mapping retains every piece of information, allowing a complete reconstruction of the original raw input through reverse mapping.
***
* **Regarding b)**, the novelty of m_t stems from its pioneering capacity to enable CL for a sequence of arbitrary AR tasks. Furthermore, it has demonstrated comparable, and in some cases even superior, results in a single-task training setting when contrasted with using raw input, proving that m_t does not hinder the overall model's performance.
* The **novelty of the m_t learning method** lies in incorporating a perfect-reconstruction loss to preserve the information in the raw data, and in utilizing mean and covariance losses to transform each task's inputs into a discrete uniform distribution, ensuring that no information is lost in the process and allowing a **one-to-many mapping**. This enabled CLeAR to adapt well to the CL-AR task, which we introduce for the first time. We shall describe this method more clearly in the final version.
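To make the three stated properties concrete (one-to-many mapping, uniform output distribution with range equal to the codomain, and perfect reconstruction), here is a toy illustration in Python. It is not the learned m_t from the paper, just a hand-built discrete mapping with the same properties; all names are ours:

```python
import random

def build_mapping(alphabet_size, codomain_size):
    # Each symbol owns an equal, disjoint share of the codomain, so the range
    # of the mapping equals the entire codomain and a uniform input marginal
    # yields a uniform output marginal.
    assert codomain_size % alphabet_size == 0
    share = codomain_size // alphabet_size
    forward = {s: list(range(s * share, (s + 1) * share))
               for s in range(alphabet_size)}
    inverse = {c: s for s, codes in forward.items() for c in codes}
    return forward, inverse

def map_symbol(forward, s, rng=random):
    # One-to-many: any code in the symbol's share is a valid image.
    return rng.choice(forward[s])

# Two "tasks" with different alphabet sizes share one codomain of size 12.
fwd_a, inv_a = build_mapping(alphabet_size=3, codomain_size=12)
fwd_b, inv_b = build_mapping(alphabet_size=4, codomain_size=12)

# Perfect reconstruction: reverse mapping recovers the raw symbol exactly.
assert all(inv_a[map_symbol(fwd_a, s)] == s for s in range(3))
assert all(inv_b[map_symbol(fwd_b, s)] == s for s in range(4))
```

The learned m_t achieves these properties via the reconstruction, mean, and covariance losses rather than an explicit lookup table, but the invariants being optimized are the ones asserted above.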
***
* **Our main contribution** is twofold. Firstly, we introduce a continual learning scenario for AR tasks, in which neural networks learn logic continuously, a feat that was previously unattainable within neuro-symbolic frameworks. Secondly, this pioneering work proposes the first baseline model, CLeAR, which makes this continuous learning of logic not only feasible but also achieves high accuracy and minimal forgetting in various scenarios.
* The CL-AR task possesses distinct data characteristics that pose challenges to applying existing methodologies used in CL for images or NLP. Furthermore, there is a scarcity of research on AR tasks for NNs themselves. The newly proposed experiment, inspecting the sequential acquisition of automata within the model's structure, marks a significant advancement. Consequently, this paper is poised to serve as a starting point for the novel field of CL-AR, providing valuable insights for future exploration.
***
## Appendix
We have explored various methodologies in **domain adaptation**:
In the field of statistical divergence alignment, the focus is on minimizing domain discrepancy in the latent feature space. Methods such as MMD ([1] 2018 TPAMI) and its variants ([2] 2017 PMLR), CORAL estimation using inter-domain covariance ([3] 2020 AAAI), and methods utilizing the Wasserstein distance ([4] 2020 AAAI; [5] 2021 ICASSP) measure the distance between features in each domain and align them so as to retain only the invariant features. Approaches such as [6] (2020 arXiv) and OSUDA ([7] 2021 MICCAI) use batch statistics for distribution alignment.
Methods in the field of generative domain mapping, similar to ours, change data to the target domain at the input-data level, but they employ completely distinct techniques, such as style GANs ([8] 2020 TMI, [9] 2022 SPIE).
Lastly, for robust representation learning, there are methodologies of pre-text tasks or contrastive learning ([10] 2019 IEEE Access, [11] 2020 arXiv). However, these self-supervised training methods are not feasible for AR tasks.
[1] Rozantsev, et al. Beyond sharing weights for deep domain adaptation
[2] Long, et al. Deep transfer learning with joint adaptation networks
[3] Chen, et al. Homm: Higher-order moment matching for unsupervised domain adaptation
[4] Liu, et al. Importance-aware semantic segmentation in self-driving with discrete wasserstein training
[5] Ge, et al. Embedding semantic hierarchy in discrete optimal transport for risk minimization
[6] Zhang, et al. Generalizable semantic segmentation via model-agnostic learning and target-specific normalization
[7] Liu, et al. Adapting off-the-shelf source segmenter for target medical image segmentation
[8] Yang, et al. Unsupervised MR-to-CT synthesis using structure-constrained CycleGAN
[9] Liu, et al. Structure-aware unsupervised tagged-to-cine MRI synthesis with self disentanglement
[10] Xu, et al. Self-supervised domain adaptation for computer vision tasks
[11] Kim, et al. Cross-domain self-supervised learning for domain adaptation with few source labels
---
Reply to Comment 1.2.2:
Title: Response to W3
Comment: ## Main answer W3
* **CL on AR tasks with diverse column numbers was immensely challenging.** We attempted techniques like padding, MLP- and attention-based embedding, and pre-trained language-model-based embeddings (the state of the art in tabular transfer learning), yet all of them, coupled with conventional CL methods, resulted in complete failure.
* However, our newly devised mapping function, m_t, has led to the creation of a remarkably high-performing and stable model for CL-AR, despite its **lightweight structure and intuitive training approach**. Unlike existing input space matching methods that struggled, our m_t approach has demonstrated superior performance and stability.
* We haven't pushed the limits of our mapping function's performance by extensively increasing the number of alphabets. This decision was made to ensure a **fair comparison with recent papers ([1] 2022 ICLR, [2] 2022 ICLR)** that experimented with a small number of alphabets for single-AR-task performance evaluation. Also, the main goal was to determine whether CLeAR can handle a sequence of multiple consecutive AR tasks. We have shown that single-task performance with our mapping function remains comparable to these papers, validating that our mapping does not hinder performance.
* As you've suggested, **expanding the alphabet's range and testing the mapping function's performance** to its limits, for both single tasks and CL, could be a valuable future direction. It would offer the opportunity to ascertain whether models traditionally believed capable of handling specific AR tasks remain unaffected by the alphabet's count.
***
* **The most challenging task for the mapping function**, among our experiments, was the high-correlation CL-AR scenario on modular arithmetic (**mod 8**).
In the MA task, the input alphabet consists of **eight digits 0 to 7** and __three operators +, -, *.__ The output corresponds to numbers ranging from 0 to 7, obtained through arithmetic operations. In this setup, we **changed the modulus of this task from 2 to 8**, creating a sequence of seven tasks. For instance, the input digits would be 0,1 / 0,1,2 / … / 0-7 for each task. Each task's mapping function learned to handle these varying alphabet sizes.
Despite this complexity, CLeAR demonstrated excellent performance in this scenario as well (refer to the main paper Table 2: achieving the best accuracy of 91.6% with LSTM).
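As a sanity check of what such an MA task computes, a modular evaluator can be sketched as follows. This is a hedged toy in Python; the paper's exact evaluation order and tokenization may differ (we assume strict left-to-right evaluation, with no operator precedence):

```python
def modular_eval(tokens, modulus):
    # Evaluate an alternating digit/operator sequence strictly left to right,
    # reducing mod `modulus` at every step.
    ops = {'+': lambda a, b: a + b,
           '-': lambda a, b: a - b,
           '*': lambda a, b: a * b}
    acc = tokens[0] % modulus
    for op, val in zip(tokens[1::2], tokens[2::2]):
        acc = ops[op](acc, val) % modulus
    return acc

print(modular_eval([1, '+', 2, '*', 3], 8))  # (1+2)*3 = 9, i.e. 1 mod 8
```

In the CL sequence described above, only `modulus` and the digit alphabet change from task to task, which is what each task's mapping function must absorb.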
***
## Appendix
10 AR tasks are as follows: **EP-CN-PC-CO-RS-SM-IP-OF-BS-DS**
* **Regular language** (Finite automata) should learn “internal state”
1. **EP; Even pairs**: input **0,1** (binary) of arbitrary length (1~100) / output: 0,1
Check whether the counts of 01 and 10 among the pairs of two consecutive numbers are the same
2. **CN; Cyclic navigation**: input **move left, stay, move right** of arbitrary length(1~100) / output: final cycle position (0,1,2,3,4)
The position after executing a command on a cycle with 5 positions.
3. **PC; Parity Check**: input **0,1** (binary) of arbitrary length (1~100) / output: 0,1
Whether the sum of the sequence is odd or even
* **Context-free language** (Pushdown automata) should learn “stack memory”
4. **CO; Compare Occurrence**: input **0,1,2,3** of arbitrary length(1~100)/ output: 0,1,2,3
Return the digit that appears most frequently in the string.
5. **RS; Reverse String**: input **0,1** (binary) of arbitrary length(1~100)/ output: reverse sequence (input length)
6. **SM; Stack Manipulation**: input: **digit 0, digit 1, action pop, action push 0, action push 1** / output: string with 0,1
Return the string after following the instructions
* **Context-sensitive language** (Linear bounded automata) should learn “tape memory”
7. **IP; Interlocked Pairing**: input **n*0s m*1s** (total length 1~100) / output n*0s (n+m)*1s m*0s (total length 2*input length)
8. **OF; Odds First**: input **0,1** (binary) of arbitrary length(1~100)/ output: reorder string to digits in odd index comes first (input length)
9. **BS; Bucket Sort**: input **0,1,2,3** of arbitrary length(1~100)/ output: input string sorted in ascending order (input length)
10. **DS; Duplicate String**: input **0,1** (binary) of arbitrary length(1~100)/ output: input string repeated twice (2*input length)
*Please refer to Appendix C.2 for more detail*
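For concreteness, a few of these task labelings can be written down as ground-truth functions. This is a toy Python sketch; the function names and input encodings are ours, not from the paper's code:

```python
def parity_check(seq):
    # PC: whether the sum of a binary sequence is odd (1) or even (0)
    return sum(seq) % 2

def cyclic_navigation(moves):
    # CN: final position on a 5-position cycle; moves encoded as
    # -1 (move left), 0 (stay), +1 (move right)
    pos = 0
    for m in moves:
        pos = (pos + m) % 5
    return pos

def bucket_sort(seq):
    # BS: input string over {0,1,2,3} sorted in ascending order
    return sorted(seq)

def duplicate_string(seq):
    # DS: input binary string repeated twice (output is 2x input length)
    return seq + seq
```

Each function is trivial to a human reader, which is exactly the point of the rebuttal: the difficulty lies in a network learning these automata continually, not in the tasks themselves.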
[1] Delétang, et al. Neural networks and the chomsky hierarchy
[2] Liu, et al. Transformers learn shortcuts to automata
---
Reply to Comment 1.2.3:
Title: Response to Q1
Comment: ## Main answer Q1
* Our experiments highlighted that our model **exhibited minimal forgetting and effective knowledge transfer** across **partially overlapping columns performing the same role within the same task** (this additional experiment), **overlapping columns with the same alphabet in similar tasks** (high-correlation CL-AR scenario), **tasks with distinct alphabets and no overlap** (in-hierarchy CL-AR scenario), and even **tasks requiring different automata hierarchies** (inter-hierarchy CL-AR scenario). Furthermore, additional experimentation (iPYe Q1) demonstrated that this strong performance might stem from the internal learning of automata and the alignment of shareable states across tasks.
* Our mapping also differs in purpose and usage from the feature alignment typically used in **conventional domain CL methods**, and it is applicable to most AR tasks expressed as formal languages.
Traditional CL methods used for image processing across different domains have employed techniques such as domain adaptation or co-training from diverse domains. For instance, in [1] (2021 AAAI), the model learns simultaneously from images of distinct domains and updates weights in the direction of gradient vectors with positive inner products. [2] (2022 CVPR) employs a Mahalanobis similarity matrix to align different domains while preserving the manifold by expanding or shrinking each axis of the feature space. [3] (2022 CVPR) trains a student model using the weights of a pretrained model and their moving-average values. In [4] (2023 CVPR), a few specific weights are updated for new-domain adaptation.
* These methods all rely on the fact that, even though the domains differ, there exists a strong similarity among images of the same class. For instance, even if a natural image of a car and a cartoon image of a car belong to different domains, they share characteristic shapes and textures. Therefore, feature alignment in domain continual learning penalizes the feature manifold slightly to align features across domains, as exemplified by [2] (2022 CVPR).
* **In contrast, our mapping space is distinct from the aligned feature space.** In AR tasks, the input distribution consists largely of white noise rather than a thin manifold of meaningful images. While domain adaptation aims to preserve and align the feature manifold, our mapping function's objective is to transform all input distributions into the same distribution. As a result, both the objective and the approach differ, and considering CL-AR tasks as domain adaptation is inappropriate in our view. Furthermore, for AR tasks, our mapping can effectively transform various formal languages into a uniform distribution.
***
## Appendix
[1] Tang, et al. Gradient regularized contrastive learning for continual domain adaptation
[2] Simon, et al. On generalizing beyond domains in cross-domain continual learning
[3] Wang, et al. Continual test-time domain adaptation
[4] Prasanna, et al. Continual Domain Adaptation through Pruning-aided Domain-specific Weight Modulation
***
We once again appreciate your insightful comments.
---
Reply to Comment 1.2.4:
Title: A kind reminder
Comment: ### Dear reviewer FdFh
We wanted to kindly remind you that the interactive discussion phase will be ending in just a few hours. Unfortunately, we won't be able to engage in further discussions with you after the deadline. We hope that our response has addressed your concerns, and turned your assessment to a more positive side. Please let us know if there are any other things that we need to clarify.
We thank you so much for your helpful and insightful suggestion.
Best, Authors
CWCL: Cross-Modal Transfer with Continuously Weighted Contrastive Loss | Accept (poster)
Summary: The paper proposes a novel continuously weighted contrastive loss function for multi-modal models. The proposed method takes into consideration 1) the similarity of samples in the same batch and 2) the non-binary nature of similarity between input samples, and therefore utilizes a continuous weight matrix in learning alignment between modalities. In general, I think the ideas in this paper are novel. The explanation of the methods is explicit and concrete; I think it is enough to be reproducible. However, the paper may benefit from some revision with regard to how the work is presented and the focus of the experiments.
Strengths: * The proposed idea is novel in resolving two crucial problems in standard contrastive loss.
* The authors considered modalities of both text-image and speech-image and presented some supportive zero-shot evaluation results.
Weaknesses: * The experimental results are not complete; it would be good to see a more complete set of results on how this method performs on downstream retrieval tasks (Table 4 compares CL and CWCL without indicating performance relative to baseline models such as LiT or CLIP).
* The experiments regarding the transfer from text to speech are nice; however, the results in Section 4.2.1 are not clearly presented and are a bit hard to follow.
* Figure 1 is not really informative and could perhaps be combined with Figure 4 so that more interesting and qualitative results can be presented.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: * Is it possible for you to provide results for retrieval compared to the baseline models? This would address a major weakness in the evaluation.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: * There is no explicit limitations section in the paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes
---
Rebuttal 1:
Rebuttal: **W1: Results are incomplete...**
Thank you for raising this point. In Table 4, the row corresponding to CL is indeed the LiT model. We named it CL to indicate that this model was tuned using standard contrastive loss as done in the LiT paper. In fact, the model referred to in this row follows the same training procedure as LiT, with the image encoder initialized using a pre-trained model and frozen during training. We will make this change and rename the row to LiT to clarify this.
Further, the results reported in Table 4 correspond to using the ViT-L/16 architecture for the image encoder and a 12-layer transformer model for the text encoder; trained using the YFCC15M and the CC12M datasets. However, we noticed that the LiT model achieves the best performance (using a publicly available dataset) when the BERT-large model is used as the text encoder.
Thus, we performed further experiments with the BERT-large model as the text encoder. We again observe an improvement in the retrieval performance. We provide the results in Table 12. Further, we also observe performance gains in the zero-shot image classification task as well. We provide the results in Table 13.
In the paper, we did not compare CWCL against CLIP on the retrieval task, since we expect CLIP to have lower performance than LiT (and CWCL already performs better than LiT). However, we realize that a direct comparison between CWCL and CLIP might be of interest. To this end, we provide a direct comparison of the two methods using zero-shot retrieval performance on the MS-COCO dataset. Due to the limited time frame, we trained new CWCL and CLIP models on a subset of the training data that consists of samples from CC12M only. Further, we trained both models for only 20 epochs. Although this setup results in low absolute performance for both models, we can still obtain a direct relative comparison. We outline the results in Table 14. Interestingly, the models trained on the CC12M dataset (Table 14) perform better at retrieval than models trained on both YFCC15M and CC12M (Table 4). We have noticed such behavior in open-source CLIP models (\url{https://github.com/mlfoundations/open_clip}) as well. We hypothesize that this is because CC12M is a dataset that has been subject to filtering out bad samples, as opposed to YFCC15M, which has not.
We hope that we have addressed the reviewer's concerns about the results of the retrieval task performance of CWCL based approach. We are happy to answer any further questions.
**W2: Exp Text to Speech...**
Thank you for this comment. We reviewed and edited the entire section 4.2 carefully considering readers could be less familiar than us with the contents we wrote. We were also short on space and hence combined all results into a single table (Table 2 in the original manuscript).
In the revised paper, we will improve this section for readability.
**W3: Figure 1...**
Thank you for this suggestion. We will combine Figures 1 and 4.
**Q1: Is it...**
We have provided the results of retrieval tasks in our response to one of your earlier comments. Further, we also provide results based on new experiments that studied the ViT-L-16+BERT-large configuration. We appreciate the reviewer's feedback regarding our experiments and we hope we have addressed their concern.
**Summary**
Overall, we appreciate your questions and suggestions. They have helped us improve the paper. We agree that including performance on retrieval tasks adds to the completeness of our results. We will include these in the main paper manuscript. We will also clarify the results in Section 4.2.1. We were a little pressed for space in the original manuscript but we will better manage the space in the revised version by moving Figure 1. This will help us clarify Section 4.2.1 further.
We hope that we have addressed your concerns about our paper and sincerely hope to discuss further with you during the discussion phase.
---
Rebuttal Comment 1.1:
Comment: Thanks to the authors for their detailed response. You have addressed my concerns so I will increase my rating from 5 to 6.
---
Reply to Comment 1.1.1:
Comment: We thank the reviewer for their prompt response and consideration!
---
Summary: The authors propose a composite loss in contrastive learning in order to preserve structure between two embedding spaces for different modalities. In the setting they study, two unimodal encoders are aligned by freezing one of them and computing a "continuous" contrastive weight between samples $(i, j)$ as $0.5 + \frac{q_i^T q_j}{2}$. With these weights, an additional loss for structure preservation is used in addition to the standard self-supervised loss for the unfrozen encoder. The proposed approach outperforms standard approaches such as LiT and SimCLR on zero-shot classification tasks.
Strengths: - The method is simple and efficient, requiring only a small $\mathcal{O}(k^2)$ computation for pairwise weights per batch
- Performance improvements are strong across the experimentation
Weaknesses: - There is a lot of setup, while the entirety of the novel contributions seems to be buried in Section 2.3.
- This paper makes more sense for a workshop; the contributions are limited and are a small modification of previous works such as (https://arxiv.org/pdf/1905.12837.pdf, https://arxiv.org/pdf/1811.01459.pdf, https://arxiv.org/pdf/2103.14003.pdf)
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: - Which continuous pairwise weights did the authors try? Given this is the main contribution it would be helpful to understand performance as this weighting changes in the 0.5 bias used or for nonlinear weighting wrt to the inner products.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes
---
Rebuttal 1:
Rebuttal: **W1: There is a...**
Please note that the problem setup is only done in Section 2.1. We request the reviewer to elaborate further on what they mean by "a lot of setup".
Further, our contributions are multi-fold and are not confined to Section 2.3. The novel CWCL loss function is introduced in Section 2.2, while Section 2.3 discusses how we calculate the similarity weights.
Additionally, we demonstrate the effectiveness of the proposed methods using many experiments, models, datasets, and modalities. For example, we show the "template robustness" property of the proposed methods and we provide qualitative results showing that the proposed loss function leads to better alignment between the modalities.
We believe that this is the first work to study 0-shot speech-intent classification where no task-specific speech data was used for training. Further, our results show a strong improvement over methods that use standard contrastive loss. We would consider these results to be novel.
In general, we request the reviewer to elaborate on why they feel that these other aspects are not novel. We believe that in a field like machine learning, an idea followed by extensive experimental results that demonstrate the effectiveness of the idea together contribute to novelty.
**W2: This paper...**
*The General Pair-based Weighting Loss for Deep Metric Learning (\url{https://arxiv.org/pdf/1905.12837.pdf})*:
The referred paper is very different from ours for the following reasons:
Firstly, it studies the problem of metric learning in a supervised setting, where class information is available in the dataset. This is mentioned explicitly in Section 2.A of the paper, where they formulate the problem, and in Algorithm 1 (the authors mine "positive" and "negative" examples using Equations (16, 17) and the class labels).
Secondly, they do not consider multi-modality. The algorithms proposed in this paper are geared towards a single modality. Hence, the resulting models cannot be used for other downstream tasks in a zero-shot way. On the contrary, they can only be used for the specific task that they are trained on.
In our paper, we have already acknowledged that a weighted version of contrastive loss has been developed in the supervised learning case by Khosla et al. (NeurIPS 2020, ref. [14] in our paper). However, extending weighted contrastive loss to self-supervised learning has not been studied before. Further, models such as CLIP and LiT have proven that the scale of data and models has a large impact on downstream performance, yet they only use the standard contrastive loss, because a weighted contrastive loss in this context had not been explored before. Thus, our work is not a simple modification of the above paper.
*Deep Metric Learning by Online Soft Mining and Class-Aware Attention (\url{https://arxiv.org/pdf/1811.01459.pdf})*:
This also addresses the problem of metric learning in a supervised setting. This is highlighted in the main methodology section, where they consider a uni-modal dataset of the form $\{x_i, y_i\}$, where $y_i$ is the class label. This class label is used to define the positive set $P$ (samples with the same class label) and the negative set $N$ (samples with different class labels), used to compute the losses in Eqns. (7, 8, 9).
*Rethinking Deep Contrastive Learning with Embedding Memory (\url{https://arxiv.org/pdf/2103.14003.pdf})*:
This also addresses the supervised learning case, as we quote from the introduction section of the above paper: “In this work, we simply adopt MoCo in a supervised manner for DML, which is referred to as s-MoCo. It provides a strong example for us to study the pair weighting in memory-based DML”.
**Q3: Which..**
Please refer to the answer we provided to Reviewer 2's questions on the different similarity functions that we considered.
**Overall comments on comparison with the above three papers:**
We understand the reviewer’s concern that these papers study variations of weighted contrastive loss. However, extending weighted contrastive loss from a supervised setting to a self-supervised setting is non-trivial. To summarize the difficulty, all three papers pointed out by the reviewer consider the following question: using prior information about positive and negative pairs, how does one weigh positive pairs that are further apart to bring them closer, and weigh negative pairs that are closer so that they are pushed apart? This is captured in Figure 2 of the third paper (https://arxiv.org/pdf/2103.14003.pdf). Thus, an optimal solution here is one where all positive samples are brought as close as possible, and all negative samples are pushed away as far as possible. On the contrary, in our work, there is no notion of strictly positive or negative. This is a core contribution of the paper: a softer notion of similarity helps us recreate the embedding space in one modality based on a pre-trained model from another modality.
The difference between bringing together all positive samples and repelling all negative samples (with weighted or unweighted losses) and that of considering similarity as a continuous metric and learning a continuous structure in the embedding space of a modality may be subtle, but the resulting algorithms and models have highly different capabilities and applications. Using supervised labels to construct positive and negative sets makes the models specific to those class labels; features that distinguish two inputs within a class are filtered out and features that distinguish two inputs from different classes are accentuated. However, since our models do not make use of class-specific labels, they retain all features and are capable of zero-shot classification on many downstream tasks.
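To make this softer notion of similarity concrete, the per-batch continuous weight matrix (as described in the review summary, $w_{ij} = 0.5 + \frac{q_i^T q_j}{2}$ on normalized embeddings from the frozen modality) can be sketched in a few lines. This is our illustrative NumPy sketch, not the paper's implementation:

```python
import numpy as np

def cwcl_weights(q):
    # q: (batch, dim) embeddings from the frozen pre-trained encoder.
    q = q / np.linalg.norm(q, axis=1, keepdims=True)  # unit-normalize rows
    # Cosine similarity in [-1, 1] mapped to a continuous weight in [0, 1]:
    # paired samples get weight ~1, while other pairs get graded intermediate
    # weights instead of the hard 0/1 of standard contrastive loss.
    return 0.5 + 0.5 * (q @ q.T)

w = cwcl_weights(np.random.randn(8, 16))
assert w.shape == (8, 8)
assert np.allclose(np.diag(w), 1.0)  # each sample is fully similar to itself
assert (w >= -1e-9).all() and (w <= 1 + 1e-9).all()
```

These weights then multiply the per-pair terms of the contrastive objective, so no pair is ever treated as a strict negative.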
We hope we have addressed the reviewer’s concerns regarding the relationship between weighted contrastive loss in a supervised setting and our work, and we invite them for more discussion.
---
Summary: In this work, the authors consider the problem of cross-modal zero-shot transfer and propose a new loss function called continuously weighted contrastive loss (CWCL) that extracts better supervision from pretrained models in a single modality and leads to better alignment between two modalities. They run experiments on two modality pairs, image-text and speech-text, and CWCL yields significant improvements compared to standard contrastive learning (with up to 20-30% absolute improvement on a speech-to-intent classification task).
Strengths: - The authors present a simple and effective contrastive learning variant that makes effective use of pretrained models in individual modalities.
- They achieve a significant performance boost in zero-shot image classification when compared to existing contrastive learning objectives.
- They also demonstrate superior results on a zero-shot speech-to-intent classification task.
Weaknesses: The main contribution of this work (CWCL) can be grounded better in the context of existing work. This work highlights an important shortcoming in existing formulations of contrastive learning, namely that similarity has a binary interpretation. All positive examples and all negative examples are treated equally. This has been identified in prior work, albeit for a single modality, which the authors fail to cite. E.g., "Not All Negatives are Equal: Label-Aware Contrastive Loss for Fine-grained Text Classification", V. Suresh and D. Ong, EMNLP 2021. There are also works on multimodal learning that highlight the importance of using a weighted contrastive loss to downweight faulty positives/negatives such as "Robust Audio-Visual Instance Discrimination", Morgado et al., CVPR 2021.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: - Did the authors experiment with other weighting functions for w_ij's in Equation 3? Some insights on how w_ij = q_i^T q_j/2 + 0.5 was arrived at would be useful for the reader.
- As in Figure 5, does CWCL exhibit robustness to templates for the speech-to-intent classification task as well?
- From Table 1, CWCL outperforms the supervised Resnet50 baseline on ImageNet-V2 as well as ImageNet-R, ImageNet-A and ObjNet by substantially larger margins. Is Resnet50 the appropriate supervised baseline to show for these datasets?
- From Table 2:
- RoBERTa+S performs consistently better than BART+Y on SLURP and STOP, but the trend flips for KWS. Can the authors comment on why that is?
- CWCL with RoBERTa+S appears to outperform the text-intent upper bound for STOP (87.87 vs. 84.78). Please explain this result.
- Just a comment: If the authors are looking to make some space in the draft, Figure 1 can be omitted without loss of clarity.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: Limitations have not been explicitly listed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **W1: The main...**
Both references are very interesting and relevant and we will cite them. We provide a detailed comparison below.
**Comparison with "Not all Negatives are Equal:.."**
The ideas explored in this paper are similar in spirit to those considered in our paper. However, this paper considers supervised learning. In our work, we consider fully self-supervised learning. This lets us use our models on many downstream tasks in a zero-shot way, whereas in the supervised setting, the resulting models are applicable only to the specific task that they are trained for.
**Comparison with "Robust Audio-Visual Instance Discrimination" (soft-XID):**
It is our belief that when models are first trained using the standard contrastive loss and then trained further with a weighted contrastive loss, biases from the initial training stage get reinforced later on. In the initial stage, the encoders are trained to repel all samples except those belonging to the same pair. When the same encoders are used to obtain the similarity scores in the second stage, these scores are unreliable and further reinforce the biases from the first stage.
We performed an experiment similar in spirit to the above paper. We trained a pair of image and text encoders with the standard contrastive loss on the CC12M dataset for 20 epochs. Note that this is similar to the setup of the CLIP model. After this, we fine-tuned the model with the weighted contrastive loss for another 20 epochs. In the second stage, we obtained the similarity scores using the encoders trained in the first stage, as proposed in the "Robust ..." paper. We then measured the zero-shot image classification accuracy on ImageNet. We also trained a model using the proposed CWCL method for a total of 40 epochs. We provide the results below in Table 11. We further note that in the first experiment (emulating the soft-XID method) the 0-shot accuracy decreased upon fine-tuning, further indicating a conflict with the similarity scores generated by the models trained with the standard contrastive loss.
Overall, we find that the method of first using standard contrastive loss and then using a weighted contrastive loss yields worse performance as compared to our proposed method.
**Q1 Did...**
We first considered a softmax-based weighting function defined over the intra-modal similarities, and then our proposed function. The second formulation performs slightly better than the first. Further, it yields a nicer interpretation of the proposed CWCL loss function as an interpolation between the standard contrastive loss and the supervised contrastive loss function, as mentioned in lines 130-134 of the main paper. Note also that in the second formulation, embeddings with $\langle q_i, q_j\rangle = -1$ receive a weight of 0, thus producing no "attractive force" between such samples.
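As a concrete illustration of this weighting (a minimal numpy sketch; the function and variable names are ours, not from the paper): with unit-norm intra-modal embeddings, $w_{ij} = \langle q_i, q_j\rangle/2 + 0.5$ maps cosine similarity from $[-1, 1]$ into $[0, 1]$, so antipodal embeddings get weight 0 and identical ones weight 1.

```python
import numpy as np

def cwcl_weights(q):
    """Pairwise weights w_ij = <q_i, q_j>/2 + 0.5 for the rows of q.

    Maps cosine similarity from [-1, 1] into [0, 1]: antipodal pairs get
    weight 0 (no attractive force), identical pairs get weight 1.
    """
    q = q / np.linalg.norm(q, axis=1, keepdims=True)  # ensure unit norm
    return q @ q.T / 2.0 + 0.5

q = np.array([[1.0, 0.0], [-1.0, 0.0], [0.0, 1.0]])
W = cwcl_weights(q)
# W[0, 0] = 1.0 (identical), W[0, 1] = 0.0 (antipodal), W[0, 2] = 0.5 (orthogonal)
```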
**Q2: As in...** Yes, we provide these results in Table 10 of the rebuttal PDF.
**Q3: From Table 1...**
One of the advantages of models such as CLIP, LiT, and CWCL is that the same model can be used on multiple downstream tasks and datasets in a zero-shot way. To make a more direct comparison to this regime, we chose a supervised, fine-tuned model that achieves comparable performance on ImageNet, and we report that same model's fine-tuned accuracies on the other datasets. We wanted to demonstrate how using a single model for multiple tasks performs and to compare it with the CWCL-based model. Further, we note that the authors of Zhai et al. (LiT) make the same comparison.
**Q4: From Table 2 RoBERTA+S...**
The pre-trained text model used in RoBERTa+S was trained on data that included the SLURP intent classification text data (the SLURP speech data was not used at any stage of training, and no data from STOP was used in the pre-training stage). We think this caused the model to overfit to the task of intent classification in general, outperforming BART+Y on SLURP and STOP while performing worse on keyword spotting (KWS).
**Q5: From Table 2 CWCL...**
We would like to clarify that in using the term "upper bound", we meant to convey that the text models' performance on intent classification serves as a "reference" for what can be expected from the speech encoder after training. We elaborate further below.
In our method, we "freeze" the text encoder and train the speech encoder using a paired (speech-text) dataset; in particular, we use the CommonVoice dataset. Note that the text model does not get updated and hence does not get a chance to process this dataset. Further, the speech encoder was initialized using a pre-trained ASR model's encoder. A third difference between the text model and the speech model is that the speech tower may have learned to use acoustic cues, in addition to linguistic information in a given speech utterance, to align its embedding to the semantic embedding from the text tower. These three differences could explain why the speech model sometimes performs better than the text model. However, we still expect the text model's performance to be a good reference for what the speech model can achieve after training.
Another factor that might have led to this is that the number of classes in the STOP dataset is low (8), as opposed to the SLURP dataset (which has 60 classes). Thus, in the case of SLURP, the performance on the speech-intent task is lower than the text-intent task, since it is more challenging.
---
Rebuttal Comment 1.1:
Title: Response to rebuttal
Comment: Thanks to the authors for their detailed response and clarifications to my specific questions. I'm raising my score from 5 to 6.
---
Reply to Comment 1.1.1:
Comment: We thank the reviewer for their prompt response and consideration! We appreciate your comments and feedback on our paper. | Summary: They propose a simple but effective method to align the representation spaces of two self-supervised models using pairs of examples from two modalities. They propose a CWCL loss that reweights the contrastive loss of example pairs based on the similarity measured in one modality (Equation 2). Specifically, let $(v_1, u_1)$ and $(v_2, u_2)$ be two example pairs; the CWCL loss encourages the representation of $v_1$ to be close to $v_2$ if $u_1$ is close to $u_2$. They experiment with a task requiring text-image alignment and a task requiring text-audio alignment in zero-shot settings. The results show that adding the CWCL loss to the original contrastive loss greatly outperforms previous methods.
Strengths: 1. The proposed approach is very simple, but the results show that it brings significant improvements. Especially for the image classification task, their model outperforms the supervised-trained model in the zero-shot setting.
2. They also show that, compared to a previous method, LIT, their model is more robust to the choice of prompt templates used to generate the text representation for the zero-shot image-classification experiment.
3. The experiment of audio-text alignment also shows that adding the CWCL loss improves the performance.
Weaknesses: 1. The ablation study comparing pure contrastive loss vs contrastive loss plus CWCL is only conducted for the audio-text alignment task.
2. In Section 4.2.1, the authors selected the text model very carefully. It thus seems that this method is very sensitive to the choice of the text model.
3. It may provide more insight if there are some qualitative analyses between the model trained without CWCL and the model trained with CWCL.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: 1. The performance of LiT reported in Table 1 is much lower than the performance reported in the LiT paper. Could you explain what might be the reason for this discrepancy?
2. For the zero-shot keyword spotting task, because this task seems to be very much at the lexical level instead of the semantic level, I wonder what would be the benefit of using a contextualized pretrained language model?
3. Instead of having CWCL loss in addition to the original contrastive loss, what about having CWCL loss only but you set the weight of negative pairs to be 0? There seem to be some subtle design choices made for CWCL. I would like to know more.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations: I didn't find the authors address the limitations of this work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **W1 The ablation study...**
We provide the comparison between the standard contrastive loss and CWCL for the image-text modality pair in Table 1. Please note that we have chosen to name the standard contrastive-loss-based training LiT, since we follow their training protocol by first initializing with a pre-trained image model and then using the standard contrastive loss as the training objective.
The same comparison for the speech-text modality pair is provided in Table 2. For the standard contrastive loss, we initialize both speech and text encoders with pre-trained models, freeze the text encoder, and train the speech encoder only. In this regard, it is similar to LiT. However, we name this method CL and not LiT, since LiT, as its name (**L**ocked-**I**mage **T**uning) suggests, does not consider this pair of modalities, and we wanted to avoid any misrepresentation of the LiT paper.
Hence, the ablation studies presented in Table 1 and 2 are equivalent and consider the image-text and speech-text modalities respectively. We will clarify this further in the revised paper.
**W2 In section 4.2...**
We agree that the text model choice could affect the final model performance on the downstream tasks. However, when selecting the two text models used in this paper, we simply selected the 2 models that performed best on text intent classification on SLURP, among 50 publicly available text models (details are given in the model architecture part of Section 4.2). Please note that CWCL can be expected to outperform CL irrespective of the choice of text model; we chose these text models to demonstrate good performance on downstream tasks. All that said, how to choose the right models for training is very much worth exploring in a separate work, and we appreciate this comment.
**W3: It may...**
In Figures 3, 7, and 8 in the original manuscript, we provide examples of the alignment between the audio and the text embeddings generated using the SLURP speech-to-intent classification dataset. Speech (audio) data in this dataset was never exposed to the model at any stage of training. In this figure, we can observe that using CWCL leads to a strong alignment between speech samples and text samples that have similar semantic meanings. For example, all the speech samples related to queries about the news have a high alignment with the text samples that have similar meanings. (We would like to emphasize that this alignment is observed in a downstream dataset whose speech data was not used during training.) However, we do not observe such an alignment in the models trained with only standard contrastive loss (CL), as seen in the figures. Hence, the speech embedding model trained with CWCL can be considered to have a better language understanding than one trained with CL.
A second set of qualitative results is provided in Section 4.1.2, where we study robustness to templates in the image-text model. Owing to a stronger alignment between images and text, models trained using CWCL generalize better under variations in the template sentences used.
**Q1: The perf...**
This is a good question! It is because of the architecture we used while training the models. In our experiments, we used the ViT-L/16 architecture for the image encoder and a 12-layer transformer for the text encoder. However, LiT achieves its best performance when the BERT-large architecture is used for the text encoder.
In order to address this, we have now trained models using the ViT-L/16 + BERT-large configuration. We followed the training procedure described in the LiT paper as closely as possible. We provide results with the updated model architecture in Table 13 (the PDF of the global rebuttal). We are able to achieve much higher performance with the new architecture (LiT: 71.2%, CWCL: 76.48%).
**Q2: For the...**
We agree that the task of keyword spotting operates more at a lexical level than a semantic level. However, our goal is to train a versatile model that can be used for multiple tasks, e.g., speech-to-intent classification and keyword spotting.
When using our model for keyword spotting, we convert the keywords to sentences and cast the keyword classification task as an alignment task. This is similar to 0-shot image classification with CLIP- and LiT-like models, which is also a task that does not require contextual language information. Further, a contextualized pre-trained language model can learn richer information about words by considering the context in which each word is used, so our model performs well even when the keywords are used as-is (without converting them to sentences).
Although we followed LiT's training protocol as closely as possible, the LiT model we trained achieves a zero-shot accuracy of 71.2\% on ImageNet (as opposed to the 75.7\% reported in the LiT paper). We hypothesize that this difference is due to some training hyperparameters not being fully specified in the LiT paper (such as the details of the Adafactor optimizer) and to possible differences in the training data (some URLs in the CC12M and YFCC datasets may have become inactive, affecting our download of the datasets). However, under the same training setup, the CWCL model achieves 76.48\%. In general, we again find that the CWCL-based models achieve much better performance.
**Q3: Instead...**
This is an interesting suggestion. When the weights for the negative pairs are set to 0, the resulting loss function is close to the standard loss function, except that the weights for the positive pairs can still be tuned. This might help with dealing with "faulty positives". We thank the reviewer for their suggestion.
In the current paper, we do not explore this and simply set the weights for the positive pairs to be 1, which results in the standard contrastive loss function.
---
Rebuttal Comment 1.1:
Comment: Thanks for the clarifications. I don't have a follow-up question at this moment. I would suggest the authors include these clarifications in their revision.
---
Reply to Comment 1.1.1:
Comment: We thank the reviewer for their prompt response to our rebuttal. We will make sure to include the above clarifications in the revised paper. | Rebuttal 1:
Rebuttal: We would like to thank all the reviewers for their time and effort in reviewing our paper. We appreciate all of the comments and they have helped us improve our paper. Here, we first provide a summary of our paper. Then, we outline the major concerns expressed by the reviewers and explain how we have addressed them. Following this, we also provide a summary of the positive comments made by the reviewers.
**Paper summary:**
1. We consider the problem of cross-modal transfer, where representations in one modality are learned using a pre-trained model in another modality and a paired dataset that consists of pairs of the two modalities.
2. To address inefficiencies in the training objective used in existing work, we propose a novel loss function called "Continuously Weighted Contrastive Loss" (CWCL) that considers associations between all the pairs in a training batch.
3. We show that the new loss function leads to a 5-8% (absolute) increase in 0-shot task performance for the image-text modality pair. We also study our method in the context of speech-text modality pair. We provide the first results in 0-shot speech-to-intent classification and keyword spotting, where our models achieve 20-30% (absolute) improvement over existing methods.
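One plausible reading of this objective can be sketched in a few lines of numpy (this is our reconstruction from the description above, not the paper's exact equation; the names and temperature value are illustrative): instead of a one-hot positive/negative target, each cross-modal log-probability is weighted by the intra-modal similarity computed in the frozen modality.

```python
import numpy as np

def weighted_contrastive_loss(z_new, q_frozen, temperature=0.1):
    """Sketch of a continuously weighted contrastive loss.

    z_new:    (N, d) embeddings from the modality being trained.
    q_frozen: (N, d) embeddings from the frozen, pre-trained modality.
    Hard positive/negative labels are replaced by continuous weights
    derived from intra-modal similarity in the frozen modality.
    """
    z = z_new / np.linalg.norm(z_new, axis=1, keepdims=True)
    q = q_frozen / np.linalg.norm(q_frozen, axis=1, keepdims=True)
    logits = z @ q.T / temperature                       # cross-modal similarities
    log_p = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    w = q @ q.T / 2.0 + 0.5                              # continuous weights in [0, 1]
    w = w / w.sum(axis=1, keepdims=True)                 # normalize per anchor
    return -(w * log_p).sum(axis=1).mean()

rng = np.random.default_rng(0)
z = rng.standard_normal((4, 8))
loss = weighted_contrastive_loss(z, z)
```

If the weight matrix were replaced by the identity, this would reduce to the standard InfoNCE-style contrastive loss, consistent with the interpolation interpretation mentioned in the rebuttals.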
Overall, the reviewers had the following positive comments about our paper: The reviewers feel that our work addresses an important shortcoming in existing formulations (yjNF, 1ozP) and that we propose a novel solution that is also simple and efficient (q9nH). All the reviewers expressed that our experimental results are very strong. Reviewers also found the template-robustness property of our models interesting (5jbW).
**Summary of reviewer concerns**
We categorized the reviewers' concerns into broader themes. We describe them below by providing a summary of our responses to each of them.
**Completeness of experiments and interpretation of the results:** Reviewer 5jbW felt that some experiments were missing for the image-text modality pair, yjNF had some questions about experimental results, reviewer q9nH had a question about the design choice for the similarity function, and 1ozP asked for more results on the retrieval task for the image-text modality pair. We summarize our responses below:
- We provide new experimental results in the rebuttal including a direct comparison between our method, CLIP, and LiT on the retrieval task. We also provide updated results for all the image-text experiments with a new model architecture based on the ViT-L-16+BERT-large configuration.
- In particular, our new image-text model achieves a 0-shot ImageNet accuracy of 76.5\%, which is better than the best baseline (LiT) when trained on the same dataset.
- We have provided explanations for the choice of our baseline models in our response to reviewer yjNF. We have also provided explanations for some of the results in Table 2 regarding the 0-shot performance in speech-text tasks.
- We have addressed why we chose the similarity function that we use in our paper.
- We realized that we used two different naming conventions (`CL` and `LiT`) for the baseline method that uses the standard contrastive learning loss. This led to some confusion regarding the equivalence of the experiments between the image-text and speech-text modality pairs. We will correct this to ensure uniformity.
**Comparison to existing work:** Reviewers yjNF and q9nH pointed us to a few papers that are relevant to our work. We provided detailed responses comparing our work to them in the rebuttals, and we summarize them here.
All but one of the papers pointed out by the reviewers consider a weighted contrastive loss for supervised learning, where class information is available. We have cited the work of Khosla et al. in our main paper, which also considers contrastive loss in the supervised case. In our work, we consider self-supervised learning. In the supervised case, conditioned on prior information about which pairs are positive and which are negative, positive pairs that are far apart are weighted highly to bring them closer, and negative pairs that are close are weighted highly so that they are pushed apart strongly. On the contrary, in our work there is no notion of strictly positive or negative pairs. This is a core contribution of the paper, since a softer notion of similarity helps us recreate the embedding space in one modality based on a pre-trained model from another modality.
One of the papers considers self-supervised learning and is more similar to our paper. However, since they do not use pre-trained models, they need to use a "self-teaching" process, where a model is trained initially using only standard contrastive loss and then fine-tuned using a weighted contrastive loss.
During the rebuttal phase, we performed experiments with the self-teaching strategy and found that performance usually does not improve much, and in our experiments even degrades. This may be due to the conflicting nature of the two stages of training. We provide experimental results in our response. Further, the weighted loss they propose is designed to "down-weight" the effect of faulty positives and "up-weight" faulty negatives; in our method, we do not consider samples as positives or negatives at all.
**Presentation of the paper:** We also received some comments about the presentation in our paper. We will address these concerns in the revised version of our paper.
Pdf: /pdf/e1f7a12458c91bb0490cb4b8a41c4c71ad736ff7.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
A polar prediction model for learning to represent visual transformations | Accept (poster) | Summary: The authors create a self-supervised video prediction model that relies on principles of the Fourier shift theorem. The approach is inspired by work suggesting that humans rely on perceptually straightened representations to support prediction. The authors first validate their method on toy cases of simple translation and rotation, and later show that their model outperforms motion compensation and some standard deep learning models at next-frame prediction. Finally, they draw connections between their model and early visual processing in biological vision and discuss their model's wider impact on computer vision (a simpler and more principled way to obtain natural video representations).
Strengths: The methods are well thought-out and of good quality. The connection between perceptual straightening and the Fourier shift theorem is clever and novel. The use of both local and multi-scale processing in the model's design is thoughtful and clearly grounded in principles of human visual processing. In general, the authors make it very clear how their method relates to prior work -- this makes the novelty and significance of the work clear.
The analyses of the model are good. I appreciate that the authors test their approach on synthetic data and then move to larger natural video datasets like DAVIS. Some good baselines are included in the analysis (copying frames, a basic CNN), and there is an analysis of the filters learned by the model.
Weaknesses: The main weakness I see is the lack of baseline comparisons to existing methods in video prediction or video representation learning. The work only compares the model to a basic CNN, which makes it difficult to understand how it compares to more state-of-the-art approaches, or even to older methods grounded in neuroscience like PredNet. Reporting scores for more models, or adding another dataset (like Kinetics) with more performance measures such as MSE or LPIPS, would address this point. In addition, more detail on compute and training requirements would help in assessing the significance of the approach.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: I generally found the paper clear and interesting, but I would like to hear more on the following points:
Could the authors discuss how their model might compare to newer architectures such as ViTs and diffusion models for video prediction? Could the authors report MSE, SSIM, and LPIPS scores to give a better sense of how their model compares to other approaches? To that end, do the authors have performance results on other datasets used in video prediction (for example, Moving MNIST or Kinetics)?
Do the authors have any thoughts on other downstream tasks their video representation learning framework could be useful for? Or are there aspects of video prediction at which your model might do better, being grounded in a straightened representation space (for example, long-range vs. short-range prediction)?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Limitations are not widely discussed, but aside from the weaknesses I highlighted I do not see a glaring limitation that needs to be addressed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your review and comments.
* **Architectures, metrics and datasets**: we have added a U-Net architecture to our comparisons, see global response. We could not find a reference PyTorch implementation of the PredNet architecture, but to ease comparison in the future, we intend to release our code upon acceptance. Since a main motivation of our study was to develop an interpretable prediction method, we did not consider architectures such as ViTs or diffusion models, although we expect that such expressive architectures would perform very well (given enough data). We also computed SSIM and observed that the results come out similar and do not change our initial interpretation, see global response. We have also applied our method to the UCF-101 dataset; the trend in the results was again similar, and we chose not to include it to reduce clutter. We are currently designing toy datasets to test specific hypotheses regarding video prediction (especially at occlusion boundaries).
* **Downstream**: see applications in global response.
* **Limitations**: see global response.
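For reference, the PSNR metric used in these comparisons is just a log-scaled pixelwise MSE; below is a minimal sketch (our own helper, assuming images scaled to [0, 1]; SSIM additionally compares local luminance, contrast, and structure statistics and is available in libraries such as scikit-image):

```python
import numpy as np

def psnr(pred, target, max_val=1.0):
    """Peak signal-to-noise ratio in dB; higher means a closer prediction."""
    mse = np.mean((pred - target) ** 2)
    if mse == 0:
        return np.inf  # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)

# A uniform error of 0.1 on a [0, 1] image gives MSE = 0.01, i.e. 20 dB.
```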
---
Rebuttal Comment 1.1:
Title: response to rebuttal
Comment: Thank you for adding the additional work on architectures, metrics, and datasets. The paper is more solid with these additions, but the essential contribution is the same. I still think the paper is solid and deserves to be accepted, so I keep my score. | Summary: Motivated by the Fourier shift theorem and its group-theoretic generalization, this paper proposed a new video prediction model.
Strengths: 1. Biological modeling: computational elements can be used to describe primate V1 responses.
2. Fourier shift theorem and its group-theoretic generalization were incorporated into future prediction
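The Fourier-shift intuition named here can be demonstrated in a few lines (a toy 1-D numpy sketch of the principle, not the paper's architecture): a circular translation multiplies each Fourier coefficient by a unit-modulus phase, so extrapolating that phase from two observed frames predicts the next frame exactly for pure translations.

```python
import numpy as np

# Two observed "frames": a 1-D signal and its circular shift by 3 samples.
rng = np.random.default_rng(0)
x1 = rng.standard_normal(32)
x2 = np.roll(x1, 3)

# In the Fourier domain the shift is a per-frequency phase advance, so
# applying the observed phase ratio X2/X1 once more extrapolates the motion.
X1, X2 = np.fft.fft(x1), np.fft.fft(x2)
x3_pred = np.fft.ifft(X2 * (X2 / X1)).real  # equals np.roll(x1, 6) up to rounding
```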
Weaknesses: 1. Evaluation: only simple baselines like CNN were included for comparison. Only one quantitative metric, PSNR, was used. Other metrics like SSIM or FVD can be more informative. No demonstration of any qualitative results.
2. Loss: this framework used MSE as loss for optimization, which indicated this setting didn't consider any uncertainty factors.
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: I am not familiar with computational neuroscience. Therefore, I can only provide useful feedback with respect to generative performance.
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 2 fair
Limitations: No limitations were discussed in the paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your review and comments.
* **Evaluation**: we computed SSIM, the results come out similar and do not change our initial interpretation, see global response. A qualitative example and its interpretation is included in Figure 7 of the supplementary material in the original submission. We intend to move it to the main text and to add more examples in the appendix.
* **Loss:** we used MSE for convenience, because it is standard and amenable to optimization via gradient descent, although it is well known that image signals are not well captured by a Gaussian model. If you have any suggestions, we would be curious to hear what you have in mind.
* **Computational neuroscience:** one of the main contributions of this paper is to connect predictive video processing with modeling of the early visual system in computational neuroscience. By starting from fundamentals (Fourier shift theorem) and considering an abstract formulation (learning the representation of group transformations), we have exposed the unity of these two approaches. In particular, normative models of V1 receptive fields are typically based on coding efficiency and sparse coding, while our approach relies on the prediction principle instead and naturally accounts for several well documented phenomena such as normalized simple cell responses and direction selective complex cell responses.
---
Rebuttal Comment 1.1:
Title: Response to rebuttal
Comment: Most of my concerns are addressed therefore I increase my rating to weak accept.
For the loss, if you want to add randomness to your prediction system, you can try using an ELBO loss.
Strengths: - The paper leverages the temporal regularity of natural video and proposes a video compression algorithm for temporal linearization and compression. The model exploits the local symmetries present in the temporal evolution of images, thereby saving computational resources compared to end-to-end CNN models. This is novel and original for video compression and image representation learning.
- The paper suggests an important research direction: instead of producing an end-to-end generic deep learning framework, the authors leverage the temporal regularity of natural image sequences to reduce computational cost. The human mind uses certain evolutionary biases to improve its efficiency, and so should we exploit these shortcuts and apply them in computer vision research.
- The paper's method is elaborated clearly with precise mathematical notation. The presentation flow is clear, although it would be beneficial to include illustrative figures that intuitively convey the main idea of the paper.
Weaknesses: - Despite the promising direction of research, the comparison between CNN models and the proposed method does not demonstrate a significant numerical improvement. Further, error bars would give a more concrete idea of how much the method has improved.
- Other evaluation metrics could be useful, such as the time to perform compression and the computational resources required. It would still be a good contribution if the model achieves the same performance using significantly fewer resources. Although the paper briefly mentions this, it would be more thorough to quantify the resource savings of the proposed method.
- A description of the overall algorithm would make the presentation more clear.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: - Detailed experimental setup and dataset description are missing in the main text.
- It would be beneficial to see evaluation metrics for compression/decoding quality and efficiency beyond PSNR, e.g., bitrate, inference time, etc.
- Will the learned representation also be useful for image classification or other static tasks?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The authors have drawn parallels between the computational steps of this approach and classical models of early visual processing in primates. However, the evidence supporting these parallels could be better substantiated. Quantitative comparison to physiological data may not be a straightforward task, and potential challenges in doing so should be discussed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your review and comments.
* **Compression**: The prediction method presented in this paper does not constitute a full video compression engine. Indeed, the errors are not quantized and we have not considered a full rate-distortion tradeoff. But we envision a possible application of the ideas developed in this study to coding, because prediction is a key step in video coding.
* **Error bars**: we added the standard deviation of prediction error over multiple runs, which helps interpret the relative performance of the algorithms; see Table 1 in the rebuttal pdf.
* **Costs**: our claim is that the performance of the proposed methods is on par with that of more complex models while using significantly fewer parameters and remaining interpretable, see Table 2 in rebuttal pdf.
* **Algorithm description**: we added equations to describe the polar predictor algorithm, using concise vector notation, see rebuttal pdf.
* **Detailed setup**: we omitted implementation details from the main text due to space limitations; this information is available in the supplementary material of the original submission, and we have improved the readability of the corresponding section.
* **Metrics**: we computed SSIM, the results come out similar and do not change our initial interpretation, see global response.
* **Downstream task**: we anticipate that segmentation, rather than recognition, would potentially benefit from the representation learned on the prediction task, a more detailed study is left for future work, see global response.
* **Comparing with physiology**: We agree that quantitative comparison with physiological data is often challenging, especially given the amount of noise and the limited number of trials. In this study we operationalized a construction of visual signal prediction that can be cast in the same modeling framework that has been used to describe neural responses - and we aim to apply it to data in a future study. | Summary: The authors in this work propose a new self-supervised learning technique that is aimed to perform predictive processing of natural videos (although the approach seems broad enough to be applicable to other sequential signals with similar inductive priors to vision). The authors develop two parameter-efficient architectures (multiscale quadratic and polar prediction mechanisms, mQP, mPP) based on Fourier shift theorem and its group theoretic generalization and train them to perform next-frame prediction from natural videos. Experimental results show that their proposed abovementioned mQP and mPP architectures outperform relevant conventional motion coding baselines and a supervised CNN on video reconstruction on the two video datasets of VanHateren and DAVIS. It is very intriguing that an interpretation of the learned filters in their architecture shows similarity to canonical receptive fields observed in V1 cells, hence providing a new theoretical hypothesis for a potential function of V1 cells in performing future prediction.
Strengths: + The proposed work seems theoretically very sound, the proposed architectures mQP and mPP are designed ground up based on the extension of the Fourier shift theorem to account for transformations that occur in natural videos. Adequate mathematical and illustrative derivation of the proposed algorithm is very useful to grasp the underlying mechanism even for non-experts in this specific area such as myself.
+ It is super interesting that canonical receptive field structure identified in V1 cells is emergent from the proposed self-supervised architecture and this serves as a foundation for the hypothesis of V1 functioning in temporal straightening and future prediction.
+ Experimental results show that the proposed approach is clearly well-performing in terms of future frame prediction. These evaluations aren't as extensive as in typical machine learning papers, but the message of the paper is clearly conveyed in this demonstration. Additionally, the link to biological modeling that I mentioned previously makes the paper a fresh read with many interesting avenues.
+ The writing is great overall and related prior work has been covered quite comprehensively.
Weaknesses: - In the experimental evaluation, it maybe good to include the number of parameters and runtime (training and inference if possible, but certainly for inference) for each algorithm. I believe this will provide a more full picture and show how much more efficient the proposed architecture is.
- It would be great if the authors also add measures of variance for the performance of various algorithms on the two datasets compared in the evaluation section.
- Although the authors have briefly touched upon potential alternative use cases of the proposed algorithm in other vision tasks (segmentation, direction estimation, etc.), it would be good to elaborate further on this for readers in the broader audience.
- Limitations of the proposed work aren't discussed in much detail; the only potential limitation listed here is the inability of the current algorithms to generalize to non-commutative groups, which is left for future work. I encourage the authors to discuss this in more detail and to add other directions they see for extending this work to the paper.
Technical Quality: 4 excellent
Clarity: 3 good
Questions for Authors: - Please see the weaknesses section above for my suggestions.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 3 good
Contribution: 3 good
Limitations: The authors have addressed limitations of this work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your review and comments.
* **Costs**: see global response and Table 2 of rebuttal pdf.
* **Error bars**: see global response and Table 1 of rebuttal pdf.
* **Applications**: see global response.
* **Limitations**: see global response. | Rebuttal 1:
Rebuttal: We thank the reviewers for their comments and questions.
The points that were raised by multiple reviewers are addressed in this global response
and the other questions are addressed in individual responses.
**Additions/extensions:**
* **Error bars**: single-run prediction errors were reported in the initial submission; we now include average prediction error (and standard deviation) computed over ten random seeds - the results are assembled in Table 1 of the rebuttal pdf. Notice that the methods we introduced (mPP and mQP) outperform simple baselines (Copy, cMC and SPyr) and rival more complex architectures (CNN and Unet) while being less variable (and also more interpretable).
* **New comparison algorithm**: we trained a Unet [1] for video prediction; it is a standard architecture for image-to-image tasks like segmentation. It has an encoder-decoder-like structure with downsampling, upsampling, and skip connections between levels at the same resolution. We used a 5-level architecture, where each block consists of two convolutions, batch norm and rectification. The convolutions have filters of size 3 by 3, comprise 64 channels, and use no additive bias. As expected, this architecture is very efficient: it runs fast (most of the computation is applied to spatially downsampled coefficients) and it is the most performant on the VanHateren dataset, but it overfits on DAVIS (even though we limited the size of the model by using only 64 channels). This architecture comprises 20 non-linear stages (ReLU), which is to be compared to the single polar non-linearity in the polar predictor (mPP).
* **Costs**: The polar predictor described in this paper is lightweight: it runs very fast and contains few parameters (two orders of magnitude less than CNN and Unet). The polar predictor is designed as an online method that could be applied to streaming data and it is indeed very quick to train and to run. Parameter counts, as well as training and inference time are reported in Table 2 of the rebuttal pdf.
* This project was developed on a NVIDIA A100 GPU, and used small datasets: 3575 frames of size 112 by 128 for VanHateren, 7332 frames of size 128 by 128 for DAVIS.
* Note that the Quadratic Predictor (mQP) is slower; indeed, we used a naive implementation of this model, as it is not intended for performance but to build a connection with the computational neuroscience literature by showing how each computational element recapitulates known functional building blocks of primate V1 physiology.
* **Performance metric**: We report performance measured by PSNR (which is the logarithm of MSE in units of the signal). We also computed SSIM and observed that it did not change the trend or the interpretation of the results. A note on interpreting PSNR results: a PSNR of 0dB means that the signal and the error have the same variance, a PSNR of 20dB (resp. 40dB) means that the signal variance is 10 times (resp. 100 times) bigger than the error variance.
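The PSNR convention described in this response (log-ratio of signal variance to error variance, in dB) can be illustrated with a minimal NumPy sketch; this is only an illustration of the stated convention, not the authors' evaluation code:

```python
import numpy as np

def psnr_db(signal, prediction):
    # PSNR in the sense used above: 10 * log10(signal variance / error variance).
    # 0 dB means error variance equals signal variance; 20 dB means the signal
    # variance is 100x the error variance.
    err = signal - prediction
    return 10.0 * np.log10(signal.var() / err.var())

sig = np.array([1.0, -1.0, 1.0, -1.0])
pred = 0.9 * sig          # residual error has 1/100 the signal variance
print(psnr_db(sig, pred)) # -> 20 dB
```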
**Writing improvements:**
* **Unify algorithm**: Each element of the prediction computation and loss function was introduced gradually through the development of the main text. To enhance clarity, we gather the equations that precisely describe the architecture and objective, using compact vector notation. These equations are at the top of the rebuttal pdf.
* **Applications**: We briefly outline potential use of the learned mPP representation for other tasks.
* Compression: in typical video compression engines like MPEG, the encoder computes and transmits motion vectors (slow to compute for the encoder but fast to apply for the decoder) as well as correction bits. Since our polar predictor can very rapidly compute a predicted frame, we envision a compression method that only sends prediction errors - thereby potentially saving substantial bitrate on motion vectors. In practice, the feasibility of this approach would depend on the propagation of error quantization. Actually testing these ideas at different quantization levels is a large endeavor that is well beyond the scope of the present study.
* Segmentation: visual content that moves together might belong to the same object; therefore, the equivariant part of the polar predictor could be used to infer object boundaries by clustering phase changes. We expect that heading direction estimation could be approached in a similar way and leave this for future work.
* **Limitations**: Throughout the main text, we have formulated assumptions and expressed the limitations of our prediction methods. We will consolidate them into a paragraph to be included in the discussion section of the main paper.
* Our method has several limitations. The polar and quadratic prediction mechanisms assume fixed amplitude and linear phase increments in successive frames. Therefore these parameterizations cannot capture acceleration or longer temporal dependencies, and they also cannot precisely handle fast and long-range displacements or deformations. In practice, and as can be seen in the qualitative prediction examples, errors often occur at occlusion boundaries. We rely on a simple MSE objective function, which corresponds to a simplistic Gaussian noise model on the pixels that we know to be an inadequate description of visual signals. The synthetic translation and rotation examples show that different representations are required to capture different group transformations, but the polar predictor architecture is forced to find a compromise between the two. We are now instead considering representations that can modulate their gain in order to adapt to the transformation currently acting in the data. Finally, recall that our choice of parameterization was motivated by the representation theory of commutative Lie groups; it therefore cannot handle non-Abelian transformations, and we leave exploration of the QP mechanism's expressivity for future work.
[1] Ronneberger, Fischer, Brox 2015 U-net: Convolutional networks for biomedical image segmentation. In MICCAI
Pdf: /pdf/632a981b55d7746e7fc091d428f7eb8f3642524c.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: The authors present a self-supervised representation-learning framework inspired from the idea of continuous deformations of objects in videos. The framework allows for next-frame prediction given previous ones, and borrows ideas from the Fourier shift theorem. The framework is composed of three stages - an analysis transformation $f_w$ of given frames, a prediction transformation $p_w$ that predicts the next frame representation given previous ones, and a synthesis transformation $g_w$ which transform the latent representation to the actual frame. $g_w$ is implemented as the inverse of $f_w$, and the two share parameters. The authors present two variants of their framework, the polar predictor (PP) where $p_w$ is a fixed polar extrapolation, and a learnt $p_w$ variant termed the quadratic predictor (QP). The authors also detail results on next-frame prediction task, and show that their method surpasses baseline techniques.
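The polar-extrapolation idea summarized above - advance the phase linearly while holding the amplitude fixed - can be sketched with a 1-D DFT standing in for the learned analysis transform $f_w$ (with $g_w$ as its inverse). This is a simplified illustration of the Fourier-shift-theorem intuition, not the paper's learned implementation:

```python
import numpy as np

def polar_predict(x_prev, x_curr):
    # Analysis: DFT stands in for the learned transform f_w.
    z1, z2 = np.fft.fft(x_prev), np.fft.fft(x_curr)
    # Polar extrapolation: keep amplitudes, advance each phase by the
    # per-coefficient phase step estimated from the last two frames.
    amp = np.abs(z2)
    phase_step = np.angle(z2) - np.angle(z1)
    z3 = amp * np.exp(1j * (np.angle(z2) + phase_step))
    # Synthesis: inverse transform stands in for g_w.
    return np.real(np.fft.ifft(z3))

# For a pure circular translation, this prediction is exact (shift theorem).
t = np.arange(8)
x = np.sin(2 * np.pi * t / 8) + 0.5 * np.cos(2 * np.pi * 3 * t / 8)
pred = polar_predict(np.roll(x, 0), np.roll(x, 1))
assert np.allclose(pred, np.roll(x, 2))
```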
Strengths: - The idea of exploiting the temporal symmetries and local deformations in videos to derive a natural framework for next-step prediction is interesting and has the potential to provide interpretability for sequence prediction tasks.
- The idea of sharing parameters between the synthesis and analysis operators, contributes clarity to the framework's underlying process.
- Multiscale processing using a Laplacian pyramid is an interesting solution to overcome video transformations of different scales.
- The proposed framework shows promising results in the provided evaluation.
Weaknesses: - Technical details regarding the implementation of the learned transformations is missing from the main paper.
- The section on multiscale processing using a Laplacian pyramid, in my opinion, is hard to follow. A figure illustrating the process would greatly help.
- The paper could benefit from some ablation tests, for example - using shared weights for $f_w$ and $g_w$ as opposed to separate weights, using different network architectures/depths and so on.
- The authors present the method as a representation-learning framework, but it's not clear how the learnt representations could assist with tasks beyond next-frame prediction? I would appreciate listing some additional possible applications. Video compression is briefly mentioned as a possibility, but no details are given (are the representations compact relative to the original frame?)
- Regarding the comparison to baseline methods - the baseline CNN architecture chosen for comparison seems quite arbitrary. On the one hand, the authors claim their method achieves comparable or better results using fewer parameters than the CNN, but on the other hand - no baseline CNN with the same number of parameters is given for comparison. In addition, the architecture seems arbitrary - 20 layers with the same number of channels, no skip connections in between (only one that wraps the entire network so that the predictions are residuals) and no information regarding non-linearities and/or normalization layers. It seems to me that choosing an off-the-shelf architecture for general image-to-image tasks (and slightly modifying the input/output) would have been a much better and more competitive baseline. Furthermore, the related work section mentions additional works on next-frame prediction (for example using LSTMs), which could contribute to the comparison. I would appreciate the authors' response on this.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: - Have you tried using the proposed framework to predict additional frames, as opposed to only the next one? It seems quite intuitive and straightforward using the proposed method since it only requires an additional step in the polar extrapolation.
- Are there any latency constraints to the proposed approach? For example in the transformation to polar coordinates.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: After reading through the paper a few times, I couldn't find any addressed limitations (although I can't point to any unaddressed limitations).
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your review and comments.
* **Implementation details**: the learned transformations and their rationale are introduced in the main text; but, due to space limitation, their full description (architectures, datasets and optimization) is relegated to the supplementary material. We improved the readability of these sections, and in the rebuttal pdf, we included equations to summarize the polar prediction method.
* **Laplacian pyramid**: we drew a multiscale diagram and included it as a figure in the rebuttal pdf, we also clarified the notation and text in the corresponding caption. The multiscale processing in mPP and mQP can be thought of as a preprocessing step: the analysis ($f_w$), prediction ($p_w$) and synthesis ($g_w$) steps are applied to the Laplacian coefficients ($\Delta \mathbf{x}$) instead of the image itself ($\mathbf{x}$).
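This preprocessing view can be illustrated with a minimal 1-D sketch, where a simple 3-tap lowpass and factor-2 down/upsampling stand in for the paper's actual pyramid filters (an assumed, simplified construction): the prediction stages would then operate on the bandpass coefficients $\Delta \mathbf{x}$ rather than on $\mathbf{x}$ itself.

```python
import numpy as np

def laplacian_pyramid_1d(x, levels=3):
    # Crude Laplacian pyramid: blur, downsample, and store the residual
    # bandpass coefficients ("Delta x") at each level.
    pyr = []
    for _ in range(levels - 1):
        blurred = np.convolve(x, [0.25, 0.5, 0.25], mode="same")
        low = blurred[::2]
        up = np.repeat(low, 2)[: len(x)]  # crude nearest-neighbor upsampling
        pyr.append(x - up)                # bandpass coefficients at this scale
        x = low
    pyr.append(x)                         # lowpass residual
    return pyr

# Reconstruction inverts the construction exactly: upsample and add details.
x = np.arange(16, dtype=float)
pyr = laplacian_pyramid_1d(x, levels=3)
rec = pyr[-1]
for d in reversed(pyr[:-1]):
    rec = np.repeat(rec, 2)[: len(d)] + d
assert np.allclose(rec, x)
```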
* **Ablations**: we have explored decoupling the analysis and synthesis transformations (i.e., untying feedforward and feedback weights) of a Polar Predictor (not multiscale). It achieved similar prediction performance on the VanHateren dataset (tied: train 29.87 / test 28.83 dB; vs. untied: train 29.99 / test 28.87 dB), and we observed that the learned projection and basis vectors align as training progresses.
* **Applications**: compression and segmentation are two downstream tasks where the proposed method may prove effective, see global response for a brief discussion.
* **Comparison methods**: Although, we have not systematically optimized the hyperparameters of the CNN (we are reusing the dnCNN [1] architecture which has proven successful in other image-to-image tasks), we have added a U-net baseline - see global response. We have not studied fully recurrent architectures (such as LSTMs).
* **Beyond next-frame**: We have observed that, as expected, the quality of the prediction quickly degrades with the temporal horizon, but we have not carefully compared the performance decay rate of different prediction methods. Computing further predictions in the latent space and then generating the corresponding frames using the synthesis transform is an interesting suggestion. Now that you mention it, it occurs to us that such predictions could be compared to a recursive approach where the next frame is predicted and then fed back in as an input - we intend to run these experiments.
* **Speed**: For parameter count and latency, see global response and Table 2 in the rebuttals pdf.
[1] Zhang, K., Zuo, W., Chen, Y., Meng, D. and Zhang, L., 2017. Beyond a gaussian denoiser: Residual learning of deep cnn for image denoising. IEEE transactions on image processing
---
Rebuttal Comment 1.1:
Title: Response to rebuttal
Comment: I have read the other reviewers' comments and the authors' responses.
I believe the rebuttal properly answered some of the concerns raised by myself and the fellow reviewers. Specifically, I appreciated the clarifications, added diagram and additional baseline (U-Net).
I have updated the score accordingly. The reason why I chose not to give a higher score, mainly lies in the currently undeveloped applications to the representation learning framework, and the fact that the U-Net baseline seems to be comparable in terms of accuracy, and superior in terms of latency, despite its simplicity (although the reduced number of parameters is comforting). | null | null | null | null | null | null |
Focus on Query: Adversarial Mining Transformer for Few-Shot Segmentation | Accept (poster) | Summary: The paper argues that mining information from the support set does not effectively improve Few-shot Segmentation (FSS) results. Instead, the paper proposes a query-centric FSS model called AMFormer, which uses adversarial learning to generate segmentation results with only rough support guidance.
Strengths: 1) The motivation in this paper is both convincing and novel.
2) The paper is well-organized and easy to follow.
3) The proposed AMFormer achieves SOTA performance on the two datasets.
Weaknesses: Major Weaknesses
1) Overpowered baseline model
In Tab. 5, the results of the baseline model surpass most previous works, and the main module of this paper improves on the baseline by 5.7%. The authors should provide more details on the baseline model and explain why it achieves such good performance. (Earlier works use a similar "MAP + cosine similarity" design [1*, 2*] and only achieve 45%~55% mIoU on Pascal.)
In addition, in L282, the authors seem to add the cycle-consistent transformer to the baseline model, which is not introduced in Sec. 3 and Fig. 3. I suggest that the authors use only the Discriminative Region Localization module (introduced in Sec. 3.3.2) as their baseline model.
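For reference, the "MAP + cosine similarity" design mentioned above - masked average pooling of support features into a class prototype, then per-pixel cosine similarity against the query feature map - can be sketched as follows. The shapes and names here are illustrative assumptions, not the implementation of any cited paper:

```python
import numpy as np

def map_cosine_mask(support_feat, support_mask, query_feat):
    # support_feat / query_feat: (C, H, W) feature maps; support_mask: (H, W) in {0, 1}.
    c = support_feat.shape[0]
    # Masked average pooling (MAP): average support features over foreground pixels.
    proto = (support_feat * support_mask).reshape(c, -1).sum(1) / (support_mask.sum() + 1e-8)
    # Cosine similarity between the prototype and every query pixel.
    q = query_feat.reshape(c, -1)
    sim = (proto @ q) / (np.linalg.norm(proto) * np.linalg.norm(q, axis=0) + 1e-8)
    return sim.reshape(query_feat.shape[1:])  # coarse similarity map in [-1, 1]
```

Thresholding this similarity map yields the kind of coarse query mask such designs start from.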
Minor Weaknesses
1) Running time & inference speed
The proposed AMFormer exploits multi-scale transformers in G and D, which may increase computational cost. Please discuss the impact of the different components on model efficiency.
2) Learnable parameters in Tab. 4
Please check the learnable parameters in Tab. 4, e.g., HSNet, which are inconsistent with the corresponding paper.
3) More recent works
More recent works [3*, 4*, 5*] should be included in Sec. 2.2 and Tabs. 2 and 3.
4) Generalization study of G and D
G and D should be easily applicable to other methods, e.g., using an advanced method rather than the Discriminative Region Localization module to generate the High-Confidence Map (Mqt). Therefore, the authors are encouraged to investigate G and D within different methods (HDMNet, BAM) to prove that their effectiveness is sufficiently generalizable.
[1*] PANet: Few-Shot Image Semantic Segmentation with Prototype Alignment, ICCV 2019
[2*] SG-One: Similarity Guidance Network for One-Shot Semantic Segmentation, Transactions on Cybernetics 2020
[3*] Mask Matching Transformer for Few-Shot Segmentation, NeurIPS 2022.
[4*] Singular Value Fine-tuning: Few-shot Segmentation requires Few-parameters Fine-tuning, NeurIPS 2022.
[5*] Feature-Proxy Transformer for Few-Shot Segmentation, NeurIPS 2022.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Although AMFormer achieves SOTA performance, I am confused about the results, so I give a rating of 4. I will increase the rating if the authors address all my concerns and questions.
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: More training epochs compared with some previous methods.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ### Response to Reviewer 7rkU
Thank you for acknowledging the strength of our paper. We have carefully considered your insightful and constructive comments and here are the answers to your concerns.
**Q1: Performance of our baseline:**
**A1:** Our baseline can attain a high performance mainly for the following two aspects of reasons:
(1) As described in L282, we add the self- and cross-attention layers into our baseline model, which can bring a significant performance gain (the cycle-consistent constraint is not included). This is because we aim to investigate the performance improvement brought by our proposed multi-scale and query-centric strategy within the object mining transformer in the ablation study, and it is unreasonable to attribute the performance improvement from ordinary attention layers to our design.
(2) For a fair comparison, we follow BAM and HDMNet to ensemble the prediction of a base learner, which can improve the performance. We have conducted an ablation experiment to evaluate the influence of the ensemble strategy, please refer to Table 1 of our supplementary material.
As suggested, we will add a more concise baseline with only the Discriminative Region Localization module for reference in our revised version.
**Q2: Running time & inference speed of the AMFormer:**
**A2:** Following your advice, we conduct a quantitative comparison of the efficiency of the model with different components in terms of the inference speed and the FLOPs. All models are based on the ResNet-50 backbone and tested on the Pascal-20i dataset. Inference speed for all models is tested with one single RTX 3090 GPU for a fair comparison.
| | OMT-1 | OMT-2 | OMT-3 | OMT-3+DMT-1 | OMT-3+DMT-2 (AMFormer) |CyCTR| HSNet| BAM| HDMNet|
| :---: | :---: | :---: | :---: | :---------: | :---------: |:---:|:----:|:---:|:-----:|
| FLOPs(G) | 6.7 | 8.1 | 8.6 |9.4 | 10.2 |96.7 | 20.6 | 26.6| 10.6 |
| FPS | 66.2 |53.3 | 42.0 | 42.0 | 42.0 |15.1 |35.2 | 86.0| 36.4 |
Note that OMT and DMT mean the object mining transformer and detail mining transformer, respectively, and the number suffix represents the number of attention layers. As shown in the table, the AMFormer achieves faster inference speed and fewer FLOPs than some of the methods. The high efficiency of our model can be attributed to:
(1) We adopt low-dimensional features (dim=64) in all the transformer layers, reducing the computational cost and memory consumption.
(2) We further downsample the features extracted from the backbone before feeding them into the multi-scale structure, and these features have a low spatial resolution. Therefore, our multi-scale structure does not incur significant additional computational costs and memory overheads.
(3) The detail mining transformer D is not needed in the inference stage.
**Q3: Table filling error:**
**A3:** Thanks, we will fix it and check the manuscript carefully.
**Q4: Missing related works:**
**A4:** Thanks, we will certainly cite and discuss these excellent works in the revised version.
**Q5: Generalization study of G and D:**
**A5:** We appreciate your insightful suggestion. To verify the generalization of AMFormer, we integrate G and D into two advanced methods, i.e., BAM and HDMNet by taking their mask prediction as the discriminative target region $M_{qτ}$. The new models are trained in an adversarial manner.
|model |mIoU | model |mIoU |
| :---: |:---:| :---: |:---:|
| BAM |67.8 | HDMNet | 69.4|
| BAM + G+D |71.1 | HDMNet + G+D | 71.5|
Surprisingly, the integration of the G and D brings significant performance gains on both BAM and HDMNet. This result shows the generalizability of the query-centric AMFormer and its great potential of enhancing the performance of existing FSS models. Thanks for your advice and we will continue to explore this in future works. | Summary: This work proposes a new query-centric FSS model Adversarial Mining Transformer for few-shot segmentation.
However, the idea is exactly the same as that of SSP (ECCV 2022).
Strengths: This work proposes a new query-centric FSS model Adversarial Mining Transformer for few-shot segmentation.
Weaknesses: The idea is exactly the same as the self-support idea of SSP (ECCV2022).
As shown in Figure 1, this work proposes to first generate pseudo masks on the query image to mine query objects and then use the query object feature to generate the pseudo support (self-support) to match the query feature.
The self-support of SSP performs the same operation to leverage the query prototype to match the query feature.
The novelty and contribution of this work is extremely limited.
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: See the weakness.
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 1 poor
Limitations: See the weakness.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 2: Strong Reject: For instance, a paper with major technical flaws, and/or poor evaluation, limited impact, poor reproducibility and mostly unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ### Response to Reviewer 5HhF
Thanks for your comments.
---
**Q1: The idea of mask expansion is similar to the SSP.**
**A1:** There may be some misunderstandings about the contributions of our method. There are some previous methods, such as IPMT[1], that try to exploit the information of the query branch to mine the target, but the query guidance in both SSP and those approaches remains at a relatively coarse prototype granularity. We have cited and discussed SSP in **L44** and **L127**. We further evaluated the potential of the query branch for FSS by re-examining the role of support information and conducting a quantitative analysis of the similarities within and between objects. Then we reformulate it into a query-centric paradigm and hope to inspire more research.
It should be noted that the part of our method that resembles SSP, i.e., using MAP to derive the support prototype and obtaining a coarse query mask via cosine similarity, is commonly used in FSS models like PANet[2] and PPNet[3], which is not the core of our method. The main part of our method is **totally different** from SSP, specifically:
(1) **Granularity of query feature utilization is different:** SSP utilizes the holistic prototype-level query feature to match with query pixels, while our AMFormer further exploits pixel-level query features (pseudo support) to guide the classification of query pixels.
(2) **Manner of mining query mask is different:** SSP adopts cosine similarity to obtain the query mask and supervises it with a binary cross-entropy loss, whereas we designed a multi-scale object mining transformer G to fully explore the target and a detail mining transformer D for detail alignment. Besides, we supervise this refining process in an adversarial manner.
(3) **The dependence on support information is different:** SSP uses the support prototype more than once when predicting the final query mask (Eq.(7) of SSP), whereas AMFormer merely requires the support prototype as the category guidance initially, and does not utilize any support information afterward, i.e., a real query-centric method.
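For readers unfamiliar with the MAP-plus-cosine-similarity step that both SSP and AMFormer share, a minimal NumPy sketch may help. This is an illustration only; function names, shapes, and the threshold are our assumptions, not the authors' code:

```python
import numpy as np

def masked_average_pooling(feat, mask):
    """Derive a support prototype by averaging features under the mask.

    feat: (C, H, W) support feature map; mask: (H, W) binary foreground mask.
    """
    w = mask.astype(feat.dtype)
    return (feat * w).sum(axis=(1, 2)) / (w.sum() + 1e-8)  # (C,)

def coarse_query_mask(query_feat, prototype, thresh=0.5):
    """Cosine similarity between each query pixel and the support prototype,
    thresholded into a coarse binary query mask."""
    q = query_feat / (np.linalg.norm(query_feat, axis=0, keepdims=True) + 1e-8)
    p = prototype / (np.linalg.norm(prototype) + 1e-8)
    sim = np.einsum("chw,c->hw", q, p)  # (H, W), values in [-1, 1]
    return (sim > thresh).astype(np.float32), sim
```

In both SSP and AMFormer this coarse mask is only the starting point; the methods differ in how they refine it afterward.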
[1] *Intermediate Prototype Mining Transformer for Few-Shot Semantic Segmentation*.
[2] *PANet: Few-Shot Image Semantic Segmentation with Prototype Alignment*.
[3] *Part-aware Prototype Network for Few-shot Segmentation*. | Summary: This paper proposes a query-centric FSS method, which first performs rough segmentation based on the support features, then performs mask propagation based on the intra-semantic similarities, which is supervised by an adversarial learning process.
Strengths: 1. The proposed method is technically sound.
2. The experiments validate the effectiveness of the proposed method.
3. The paper is easy to follow.
Weaknesses: 1. The novelty is relatively limited.
The idea of mask expansion based on the intra-semantic similarities is very similar to SSP (Self-Support Few-Shot Segmentation): both methods first predict a rough mask by identifying high-confidence matching regions and then refine the mask by leveraging the rough mask as pseudo support features. Thus, in my view, the novelty of this paper lies in supervising this refining process in an adversarial manner.
2. What is the physical rationale of the learned local proxies?
There is no explicit explanation of the physical rationale of learning the local proxies. In my view, the learnable local proxies act similarly to the learnable queries in the DETR model. In this sense, each proxy aims to capture some sort of local feature in an object. I suggest presenting more theoretical analysis of the physical rationale of the learned local proxies.
3. Ablation study on the impact of the adversarial learning process, adversarial learning vs direct supervised learning by comparison between the real and fake local features.
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: 1. More analysis of the physical rationale of the learned local proxies should be presented.
2. Ablation study on the impact of the adversarial learning process, adversarial learning vs direct supervised learning by comparison between the real and fake local features.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 2 fair
Limitations: Check the weaknesses.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ### Response to Reviewer 5HhF
Thanks for your valuable comments. We will explain your concerns point by point.
---
**Q1: The idea of mask expansion is similar to the SSP.**
**A1:** There are some previous methods, such as IPMT[1], that try to exploit the information of the query branch to mine the target, but the query guidance in both SSP and those approaches remains at a relatively coarse prototype granularity. We have cited and talked about SSP in **L44** and **L127**. We further evaluated the potential of the query branch for FSS by re-examining the role of support information and conducting a quantitative analysis of the similarities within and between objects. Then we reformulate it into a query-centric paradigm and hope to inspire more research.
It should be noted that the part of our method that resembles SSP, i.e., using MAP to derive the support prototype and obtaining a coarse query mask via cosine similarity, is commonly used in FSS models like PANet[2] and PPNet[3], which is not the core of our method. As you mentioned, the main part of our method is totally different from SSP, specifically:
(1) **Granularity of query feature utilization:** SSP utilizes the holistic prototype-level query feature to match with query pixels, while our AMFormer further exploits pixel-level query features (pseudo support) to guide the classification of query pixels.
(2) **Manner of mining query mask:** SSP adopts cosine similarity to obtain the query mask and supervises it with binary cross-entropy loss, but we designed a multi-scale object mining transformer G to fully explore the target and a detail mining transformer D for details alignment. Besides, we supervise such refining processes in an adversarial manner.
(3) **The dependence on support information:** SSP uses the support prototype more than once when predicting the final query mask (Eq.(7) of SSP), whereas AMFormer merely requires the support prototype as the category guidance initially, and does not utilize any support information afterward, i.e., a real query-centric method.
**Q2: Physical rationale of the learned local proxies.**
**A2:** Your analysis is reasonable, and similar to the decoder of DETR, our local feature construction process based on the detail mining transformer can also be interpreted as adaptively pooling features based on the matching of proxies and object features. However, due to the differences in source features, loss, and training paradigms, our proxies act differently from the object queries in DETR. Specifically, the object queries in DETR correspond to the absolute positional information of the whole image, while our proxies are focused on the foreground region specified by the mask. The constraints of diversity loss and the adversarial training make the proxies learn to capture specific feature gradient patterns (such as boundaries in different directions in Figure 6) in the foreground object features. The learned proxies enable our model to focus more on these error-prone regions, thus achieving fine-grained alignment of predictions with ground truth.
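The "adaptive pooling based on the matching of proxies and object features" described in A2 can be sketched as masked cross-attention. The following toy NumPy code is illustrative only; the actual AMFormer uses learned projections and multiple layers, and all names and shapes here are assumed:

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def proxy_pooling(proxies, obj_feats, fg_mask):
    """Each proxy adaptively pools the foreground object features it matches.

    proxies: (P, C) learnable local proxies; obj_feats: (N, C) flattened
    feature pixels; fg_mask: (N,) binary foreground mask. Background pixels
    are masked out of the attention, so proxies attend only to the object,
    unlike DETR queries, which attend to the whole image.
    """
    logits = proxies @ obj_feats.T                       # (P, N) matching
    logits = np.where(fg_mask[None, :] > 0, logits, -1e9)
    attn = softmax(logits, axis=-1)                      # (P, N)
    return attn @ obj_feats                              # (P, C) local feats
```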
**Q3: Ablation study on the impact of the adversarial learning process.**
**A3:** We appreciate this insightful suggestion, and we conduct an ablation experiment to explore the impact of the adversarial learning process. Specifically, we employ an L2 loss to reduce the gap between the real and fake local features and train the model in a supervised learning manner. The experiment is conducted on the Pascal-5i dataset using ResNet-101 as the backbone.
|      | Supervised learning | Adversarial learning |
|:----:|:-------------------:|:--------------------:|
| mIoU | 69.4                | 70.7                 |
From the table, we can observe that adversarial learning significantly outperforms supervised learning in our context. We believe the reason is that supervised learning tends to focus the proxies on areas covered by both the predicted mask and the ground truth, resulting in the degradation of local features. In contrast, adversarial training drives the detail mining transformer D to explore more diverse and differentiated regions when building local features from real and fake object features, realizing the mining and alignment of different details. Thank you for your advice; we will add this discussion in our revised version.
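The two training options compared in this ablation can be contrasted in a toy sketch. Note that the adversarial variant below uses a generic non-saturating GAN objective, which is an assumption for illustration and not necessarily the loss used in the paper:

```python
import numpy as np

def l2_alignment_loss(fake_locals, real_locals):
    """Direct supervised alternative: pull fake local features (built from
    the predicted mask) toward real ones (built from the ground-truth mask)."""
    return float(np.mean((fake_locals - real_locals) ** 2))

def adversarial_losses(d_real, d_fake):
    """Non-saturating GAN losses on the discriminator's scores for real and
    fake local features (d_* are sigmoid probabilities in (0, 1))."""
    eps = 1e-8
    d_loss = -float(np.mean(np.log(d_real + eps) + np.log(1.0 - d_fake + eps)))
    g_loss = -float(np.mean(np.log(d_fake + eps)))  # generator tries to fool D
    return d_loss, g_loss
```

The key practical difference is that the L2 loss matches features element-wise, while the adversarial game only requires the two feature distributions to become indistinguishable, leaving D free to seek out whichever local patterns still differ.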
[1] *Intermediate Prototype Mining Transformer for Few-Shot Semantic Segmentation*.
[2] *PANet: Few-Shot Image Semantic Segmentation with Prototype Alignment*.
[3] *Part-aware Prototype Network for Few-shot Segmentation*. | Summary: This paper studies few-shot segmentation (FSS). It proposes a new query-centric FSS model Adversarial Mining Transformer (AMFormer), which achieves accurate query image segmentation with only rough support guidance or even weak support labels. The core idea is to have a object mining transformer (G) that can achieve the expansion of incomplete region activated by support clue, and a detail mining transformer (D) to discriminate the detailed local difference between the expanded mask and the ground truth.
The proposed method outperforms SOTA on PASCAL-5 and COCO-20 benchmark.
Strengths: + The proposed method records strong performance over SOTAs.
Weaknesses: This is not a weakness but a question arises from not fully understanding the exp setting:
1. When you evaluate and compare against SOTAs in the experiments presented in Tables 2 and 3, are the annotations, i.e., the masks used in training the few-shot learning, the same across all methods?
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: see above
Confidence: 1: Your assessment is an educated guess. The submission is not in your area or the submission was difficult to understand. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ### Response to Reviewer gxBQ
Thanks for your positive comments on the performance of AMFormer.
---
**Q1: About the experiment setting.**
**A1:** Yes, all the methods in Table 2 and Table 3 use the ground truth masks of both support and query images during training, and our method does not require any additional information for both training and testing phases, ensuring a fair comparison. | Rebuttal 1:
Rebuttal: Dear Reviewers,
We thank all of you for your valuable insightful comments. We have carefully responded to your questions accordingly with the necessary additional experiments and analyses. We hope our responses could address all your concerns. Please let us know if you have any further advice, and we are happy to respond.
Paper457 authors
Pdf: /pdf/519c9a6f775eeb9395b4626efffa6d7103d62325.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: This paper addresses the problem of few-shot segmentation. It departs from the existing view of solving this problem, which relies heavily on the exploration of the support samples. The core idea is to shift the framework from support-centric to query-centric. The proposed method contains an object mining transformer and a detail mining transformer. Through adversarial training, the method surpasses existing methods on two benchmarks.
Strengths: +: The idea is new for few-shot segmentation. Paying more attention to the query has been less investigated in the literature. The paper demonstrates the benefit of the query-centric solution.
+: The results are encouraging.
+: The writing is fluent and easy to follow.
Weaknesses: Table 1 is important for rethinking the existing support-centric methods; more details about the experimental setup used to obtain Table 1 should be included in the paper.
How about the convergence of the adversarial training?
Does the discriminative region localization module only deal with one class in the support label? What about support images that have more than one segmentation class?
It is unclear why the specific architectures of the proposed G and D have to be used; is there any evidence or insight behind the designs of these networks?
The method achieves SOTA in most cases, but not in all settings.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: The paper is overall sound. Please see the weaknesses for questions.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The paper discusses the impact and limitations in the paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ### Response to Reviewer ugVV
Thanks for your valuable comments. We will explain your concerns point by point.
---
**Q1: Experimental details about Table 1.**
**A1:** Previous methods aimed to comprehensively exploit the support features by adopting precise support masks to generate prototypes or to constrain the matching between support and query pixels. We investigated the necessity of exhaustive support information exploration through foreground erosion experiments on two baselines, i.e., the prototypical-learning-based PFENet and the affinity-learning-based HDMNet. Specifically, we randomly convert some parts of the ground-truth mask of the support image that were 1 (foreground) to 0 (background) with specific proportions. This operation randomly discards some target features in the support image.
The results in Table 1 of our paper show that the mining of support information has reached saturation to some extent, which inspires us to focus on the query branch. We will include more details in the revised version following your advice.
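The foreground erosion operation described in A1 amounts to randomly flipping a given proportion of foreground mask pixels to background; a minimal sketch (the function name and signature are our assumptions):

```python
import numpy as np

def erode_foreground(mask, proportion, seed=None):
    """Randomly flip a given proportion of foreground (1) pixels to
    background (0), discarding part of the support target."""
    rng = np.random.default_rng(seed)
    out = mask.copy()
    fg = np.flatnonzero(out.ravel() == 1)
    drop = rng.choice(fg, size=int(len(fg) * proportion), replace=False)
    out.ravel()[drop] = 0
    return out
```

Sweeping `proportion` and re-evaluating an FSS baseline with the eroded support masks reproduces the spirit of the Table 1 experiment.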
**Q2: Convergence of the adversarial training.**
**A2:** To address your concern about the convergence of the learning process, we provide the convergence curves of the training process. As shown in Figure R1 and R2, our G and D can achieve a good Nash equilibrium and converge stably.
**Q3: Whether discriminative region localization module can deal with multiple classes.**
**A3:** The AMFormer is designed for 1-way few-shot segmentation, and all methods compared in our paper follow the same standard 1-way setting, i.e., there is only one category object mask in the support label. However, the discriminative region localization module can deal well with multi-class support labels with only a simple adjustment. We can obtain discriminative regions of different categories $\{ M_{q\tau}^{c},\ c=1,2,\dots \}$ according to Eq.(5), respectively, and then select the category with the largest value in the overlapping area of the different $M_{q\tau}^{c}$. This is a meaningful setting, and we will continue to explore it in future work.
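The multi-class adjustment described in A3 (per-class region maps resolved by taking the largest value where they overlap) can be sketched as a thresholded argmax. This is an illustrative toy, with the threshold and shapes assumed rather than taken from the paper:

```python
import numpy as np

def assign_categories(region_maps, thresh=0.5):
    """region_maps: (K, H, W) per-class discriminative-region scores.
    Pixels activated by several classes are resolved by taking the class
    with the largest score; pixels below `thresh` stay background (0)."""
    best = region_maps.max(axis=0)
    label = region_maps.argmax(axis=0) + 1  # classes are labeled 1..K
    return np.where(best > thresh, label, 0)
```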
**Q4: Evidence or insight for the designs of our networks.**
**A4:** **For the design of G:** G is responsible for completely mining the foreground area under pseudo-support guidance, which can also be implemented with other FSS model structures. We adopt multi-scale attention layers to deal with spatial inconsistency and effectively utilize the intra-object similarity to equip potential foreground pixels with beneficial information. The effectiveness of multi-scale structures and attention layers has been proven in many semantic segmentation models such as [1]. Our ablation experiments also confirm the effectiveness of the design of G. **For the design of D:** We design D to mine the local object features corresponding to the predicted mask $M_e$ from G and the ground truth mask. Driven by adversarial training and the diversity loss, D can adaptively capture the patterns of error-prone local features (such as boundaries), as shown in Figure 6 of the paper. These local features are then specifically constrained, which enables the model to focus more on error-prone regions. In contrast, only using the cross-entropy loss to constrain the prediction of $M_e$ would evenly back-propagate the loss gradient to all feature pixels, making it more difficult for the model to achieve fine-grained alignment.
**Q5: Not SOTA on all settings**
**A5:** The most reliable metric for performance comparison is the average mIoU over 4 folds, on which we have achieved state-of-the-art results. As suggested, we will reconsider the wording in the revised version.
[1] *SegFormer: Simple and Efficient Design for Semantic Segmentation with Transformers.* | Summary: In this paper, the authors introduce a concept for solving the few-shot segmentation problem using weakly supervised learning concept. The method relies on weak support labels like scribbles or bounding boxes that highlight the object regions in the query. The segmentation is achieved through two blocks - the object mining transformer and detail mining transformer - during training, while only the object mining transformer is needed during inference. The proposed approach outperforms other methods, as demonstrated by the results and ablation studies. The writing is clear and helps readers understand the proposed concept effectively.
Strengths: The authors have developed a solution that can solve the few-shot segmentation problem under more practical scenarios. This means that only partial annotation on the support images is required, making it a practical tool. Additionally, the query-centric concept can lessen the reliance on support images and overcome the differences between support and target images of the same class, making the solution more applicable. Although the detail mining transformer approach proposed with part-level adversarial training can slow down training, it is not necessary during inference. Lastly, the solution works well as AMFormer outperforms previous works significantly.
Weaknesses: 1. It seems that the model complexity is quite high due to the large-scale backbone, transformer-based object mining model, and multi-scale structure.
2. I am unsure about the task division among the proxies, and it seems that the number of required proxies might depend on the complexity of the data. I hope that the design of this part can be made more clear and understandable so that it would be easier to debug in case of any issues.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. Incorporating adversarial training may affect the convergence and stability of the learning process. Can you provide more information on this?
2. Do you notice any signs of over-fitting in the suggested method?
3. How does the model complexity compare to the competing approaches in Table 2 and Table 3?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Have you considered how well the query-centric method will perform when dealing with small targets, like in satellite images? Specifically, I am wondering about its generalizability for different datasets, ex: satellite images or medical images.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ### Response to Reviewer WjSB
Thanks for your valuable comments. We will explain your concerns point by point.
---
**Q1. About the model complexity.**
**A1:** In Table 4 of the paper, we compare the number of learnable parameters of our model with those of several state-of-the-art FSS models. Following your advice, we conduct an additional quantitative comparison of model complexity in terms of inference speed and FLOPs. All models are based on the ResNet-50 backbone and tested on the Pascal-5i dataset. Inference speed for all models is measured on a single RTX 3090 GPU for a fair comparison.
| Methods | mIoU | Learnable Params (M) | FLOPs (G) | FPS  |
|:-------:|:----:|:--------------------:|:---------:|:----:|
| CyCTR   | 64.0 | 5.6                  | 96.7      | 15.1 |
| HSNet   | 64.0 | 2.6                  | 20.6      | 35.2 |
| BAM     | 67.8 | 4.9                  | 26.6      | 86.0 |
| HDMNet  | 69.4 | 4.2                  | 10.6      | 36.4 |
| Ours    | 70.7 | 5.1                  | 8.6       | 42.0 |
As shown in the table, the model complexity of the proposed AMFormer is comparable to existing methods, and it achieves faster inference speed and fewer FLOPs than some of them. The relatively low model complexity can be attributed to the following:
(1) We employ the same backbone as most of the current FSS models and keep the parameters frozen throughout the training and testing process.
(2) We adopt low-dimensional features (dim=64) in all the transformer layers, reducing the computational cost and memory consumption.
(3) We further downsample the features extracted from the backbone to low resolution before feeding them into the multi-scale structure. Therefore, our multi-scale structure does not incur significant additional computational costs.
(4) The detail mining transformer D is not needed in the inference stage.
**Q2: The task division among the proxies.**
**A2:** In L301 of the paper we briefly analyzed the task division of different proxies. As shown in Figure 6 of our paper, we visualize the region activated by different proxies to analyze what they mainly focus on. It can be observed that most of the proxies concentrate on the object boundaries of various orientations, and there is a clear correspondence between proxies and directions. For example, proxy 1 always focuses on the right boundary of the object, while proxy 3 always focuses on the bottom when processing different images. In a nutshell, the task division among the proxies is clear and consistent.
We conducted hyperparameter experiments on the number of proxies on Pascal-5i and COCO-20i to investigate how the datasets' complexity affects the optimal number of proxies, and we obtained a consistent optimum (num = 10). This indicates that the optimal setting of proxies is not sensitive to data complexity. We also took different data complexities into account when organizing Figure 6: we present images including large objects, small objects, multiple categories, and multiple instances, respectively. Please refer to Figure 3 of our supplementary material for more examples.
**Q3: Convergence and stability of the learning process.**
**A3:** To address your concern about the convergence of the learning process, we provide the convergence curves of the training process. As shown in Figures R1 and R2, our G and D can achieve a good Nash equilibrium and converge stably. To verify the stability of our method, we repeated the training of the model 5 times under the same settings (split0 of Pascal-5i with the ResNet50 backbone) and recorded its performance. The table below shows that our model is stable across different runs.
| | exp1 | exp2 | exp3 | exp4 | exp5 |
|:--: |:---: |:---: |:---: |:---: |:---: |
|mIoU | 71.4 | 71.6 | 71.2 | 71.1 | 71.4 |
**Q4: Signs of over-fitting.**
**A4:** Overfitting is a common issue in FSS tasks. Following previous methods, we adopted a series of measures to avoid overfitting, such as freezing the backbone during training, using intermediate-level features for subsequent processing, and minimizing the number of learnable parameters. To verify this visually, we provide the mIoU curve on the validation set in Figure R3, from which we can observe that our method is not prone to over-fitting.
**Q5: Generalization to other datasets.**
**A5:** We design experiments on cerebral neuron images from Electron Microscopy to validate the generalization of our method as well as its ability to handle small targets. We use Cremi-A and Cremi-B datasets [1], which contain neurons from two different brain regions. Because of their large morphological differences, we adopt neurons from Cremi-A as the base class for episodic training and neurons from Cremi-B as the novel class for testing. The model predicts the foreground region where the neurons are located, and we adopt the commonly used watershed post-processing to generate different instances of neurons.
The segmentation results are shown in Figure R4. Our model is able to segment the dense and small neuron targets accurately (yellow box), proving that our method is equally effective for small targets.
|Method | F1 Score |
|:-----:|:-----:|
|SuperHuman [2] | 0.756 |
| Mala [3] | 0.783|
|Ours | 0.851 |
We also compare against other neuron segmentation methods [2,3], which are trained on Cremi-A and fine-tuned with a small number of samples from Cremi-B. As shown in the table, AMFormer outperforms these specialized models, demonstrating the generalization ability of our method to other datasets (e.g., medical images).
[1] *https://cremi.org/data/*
[2] *Kisuk Lee et al. Superhuman accuracy on the snemi3d connectomics challenge.*
[3] *Jan Funke et al. Large scale image segmentation with structured loss based deep learning for connectome reconstruction.* | null | null | null | null |
Analyzing Vision Transformers for Image Classification in Class Embedding Space | Accept (poster) | Summary: The authors propose a method that explores properties of vision transformer (ViT) features. In particular, tokens of patches at various levels of the sequence of transformer blocks are projected to class-space in the case of models pre-trained for image classification. Building of this, authors provide various insights into representations spaces of ViTs using quantitative experiments.
Strengths: 1) The paper explores an interesting and useful direction on better interpretability of ViTs
2) Methods proposed can be useful to community
Weaknesses: 1) The novel contribution over key related work [10,12] and the additional insights in the form of quantitative experimental results seem insufficient to provide any strong new information on ViTs
2) Projecting intermediate features to class space with a weight matrix from the last layer is not directly meaningful; maybe explain the reasoning/assumption for why this would provide any useful information? The differences in results across layers could be due to misalignment (between features and projection weights) and not necessarily a lack of information.
3) How does explicit aligning of intermediate representations to class space (e.g. see [1]) affect findings? Maybe linear projection of those features to better verify what information is contained within them?
4) Results and discussion in section 5 are not explained well
[1] Naseer, Muzammal et al. “On Improving Adversarial Transferability of Vision Transformers.” ArXiv abs/2106.04169 (2021): n. pag.
Technical Quality: 1 poor
Clarity: 3 good
Questions for Authors: The statement “we can project the internal representations of ViT into the class embedding space to probe their categorical representations” is not well supported by explanation or results (see weaknesses above for more). This assumption is central to almost all insights provided by the paper. In fact, this leads to multiple missing links in various statements made in the paper (and in explanations given for results).
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 1 poor
Presentation: 3 good
Contribution: 2 fair
Limitations: No clear discussion of limitations or possible impacts.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your feedback which has helped 1) clarify in our manuscript the rationale, relevance, and novelty of our work, and 2) improve the generalizability and clarity of our findings. We address in detail each concern stated in the revision below.
_On the rationale behind our method_
We argue that projecting intermediate representations to the class-embedding space allows us to investigate how the hidden states increasingly represent the class prototype learned by the model. This assumes that the class prototype is encoded in the embedding weights of the class-projection matrix (an assumption that has been made before, see for example [1]). The new experiments comparing our framework with linear probing studies (see general response) give evidence that our method can better detect the features that lead to a categorically meaningful representation. Note that we do not use our framework to claim that early layers lack information that, after linear and non-linear transformations, will be relevant for making categorical decisions. In fact, we show that from very early layers of the model (including layer 1 for some ViT variants) there is significant alignment to the categorical representation as compared to a random model. Instead, our framework shows how projection to the class embedding of intermediate features allows us to investigate the factors (i.e. token position) and mechanism (i.e. via key-value memory pair mechanisms) that increase the alignment with the class prototype.
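The projection described here, reading intermediate hidden states through the final class-embedding matrix, can be sketched in a few lines of NumPy. This is an illustration of the general idea only; the identifiability score below is a simplified rank-based stand-in, not the paper's exact metric:

```python
import numpy as np

def project_to_class_space(hidden_states, class_embed):
    """Read every layer/token representation through the final classifier's
    class-embedding matrix (a 'logit lens' style readout).

    hidden_states: (L, T, D) per-layer token states; class_embed: (K, D),
    one row per class prototype. Returns (L, T, K) class logits.
    """
    return np.einsum("ltd,kd->ltk", hidden_states, class_embed)

def class_identifiability(logits, true_class):
    """Simplified rank-based score in [0, 1]: 1.0 when the true class has
    the largest logit at a given layer/token, 0.0 when it has the smallest."""
    n_above = (logits > logits[..., [true_class]]).sum(axis=-1)
    return 1.0 - n_above / (logits.shape[-1] - 1)
```

Tracking such a score across layers and token positions is what lets the framework ask *where* and *when* the representation aligns with the class prototype, rather than only whether class information is linearly decodable.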
We have clarified these issues in the new version of our manuscript.
_On the novel contributions of our work_
Regarding the novelty of our contributions over [10,12], the key additional advances of our work are the following:
- We introduce a generalizable interpretability framework that can be used to efficiently investigate the categorical building processes of ViTs. We strengthen the generalizability and advantages of our framework by using the new experiments reported in the general response to show that: (1) We can apply our method to analyze the representations of ViTs that have learned a class-embedding matrix during training; (2) Our framework can distinguish the effects of different training variants (e.g. datasets, architectural depth or constraints) in the build-up of class representations; (3) Our method can characterize how categorical information emerges in the image tokens more efficiently and accurately than the commonly used linear probe method (see next section for additional details).
- We use our framework to investigate how a specific mechanism not mentioned in [10,12], namely key-value memory pair systems, is used by different layers of ViTs to add semantically meaningful information to the residual stream and build categorical representations. Previous work in NLP showed that these mechanisms can be exploited for model editing, efficiency improvements at inference time, or performance improvements, and thus we consider that our study opens future avenues of research. Moreover, our new experiments (see general response) showing that these mechanisms are present across ViT variants also give evidence of the broad impact of understanding key-value memory pair systems.
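As background on the key-value memory view of MLP layers (in the sense of Geva et al.), a toy NumPy sketch; the ReLU nonlinearity and all shapes here are assumptions for illustration, not the paper's implementation:

```python
import numpy as np

def mlp_as_key_value_memory(x, W_key, W_val):
    """View a transformer MLP block as a key-value memory: keys W_key (M, D)
    are matched against the token state x (D,), and the resulting memory
    coefficients gate value vectors W_val (M, D) that are written back into
    the residual stream."""
    coeff = np.maximum(x @ W_key.T, 0.0)  # ReLU memory coefficients, (M,)
    update = coeff @ W_val                # weighted sum of values, (D,)
    return x + update, coeff
```

Under this view, a value vector that aligns with a row of the class-embedding matrix adds semantically meaningful, class-promoting information to the residual stream whenever its key fires.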
We clarify and reinforce the importance of these findings in the “Introduction” section of the new version of our manuscript.
_On the comparison with linear probing methods_
In the general response, we provide a comparison of our framework with that of linear probing studies. In these experiments, we specifically show how quantifying the “class-relevant” information contained in the inner representations of ViTs with linear probes does not necessarily enable insights into the features and mechanisms that the model uses for making categorical decisions. This is in contrast with our method which enables such insights.
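For contrast, a linear probe learns *new* per-layer decoding weights rather than reusing the model's own class-embedding matrix; a minimal ridge-regression probe sketch (the regularization value and all names are our assumptions):

```python
import numpy as np

def fit_linear_probe(feats, labels, n_classes, reg=1e-3):
    """Closed-form ridge probe: learns new weights to decode the class from
    frozen intermediate features, unlike reusing the final class embedding."""
    Y = np.eye(n_classes)[labels]                        # one-hot, (N, K)
    A = feats.T @ feats + reg * np.eye(feats.shape[1])   # regularized Gram
    return np.linalg.solve(A, feats.T @ Y)               # weights, (D, K)

def probe_accuracy(W, feats, labels):
    """Top-1 accuracy of the fitted probe on the given features."""
    return float(((feats @ W).argmax(axis=1) == labels).mean())
```

Because the probe can re-mix the feature dimensions arbitrarily, a high probe accuracy shows that class information is linearly *decodable*, but not that the model itself *uses* that information in its class-prototype direction, which is the distinction the rebuttal draws.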
_On the analysis of ViTs with explicit intermediate alignment to class representations_
Thank you for suggesting applying our framework to ViTs trained by Naseer et al., as it has led to meaningful insights and has strengthened the generalizability of our work.
As shown in Table 1 and Fig. 1 of the rebuttal PDF, results suggest that aligning the intermediate representations of the tokens to the class embedding space leads to 1) increased class-identifiability scores in [CLS] tokens across all layers; 2) more class-identifiable image tokens in the last layer; and 3) fewer class-identifiable tokens in middle layers. Moreover, as shown in Fig. 2 of the rebuttal PDF, these ViTs also make use of key-value memory pair systems and, unlike other networks, extend this use to block 12. The difference might be due to the use of the image tokens in MLP layer 12 to predict the correct category in the class-refinement module, compared to common ViT variants where only the [CLS] token in MLP layer 12 is used for prediction.
_On the clarity of Section 5_
We have now improved the discussion of the results in Section 5, and we:
- Explain some of our results in light of previous work, and provide more interpretation of the patterns that we observe.
- Rephrase some of our statements to improve clarity.
- Improve the captions of Table 1.
If there are concrete concerns with the clarity of this section beyond those mentioned, we will be glad to work on them further.
_On the discussion of limitations_
Note that we do mention the limitations of our work and its impact in the original manuscript, specifically in the original conclusion section. If there is a specific aspect that the reviewer considers important and that was overlooked in our description of limitations, we would be glad to include it in the new version of our manuscript.
We have benefited from your comments and suggestions, and believe we have addressed them favorably. If so, please consider raising the reviewer rating.
[1] Hill, F. “Why transformers are obviously good models of language.” (2023).
---
Rebuttal Comment 1.1:
Title: Response to reviewer
Comment: I thank the authors for the rebuttal.
However, my two key concerns, weaknesses 1 and 2, remain unresolved. I keep my rating as it is.
On weakness 2, *"This assumes that the class prototype is encoded in the embedding weights of the class-projection matrix (an assumption that has been made before, see for example [1])."*, this assumption is still vague, with only references to non-peer-reviewed works attempting to support it.
---
Reply to Comment 1.1.1:
Comment: Thank you for your feedback. We would appreciate making the concerns concrete (please see below).
Regarding weakness 2, we want to clarify that our assumption (actually, an interpretation) is both theoretically and empirically supported.
It is supported theoretically by the fact that the final classification layer is only reading information from the class-embedding weight matrix, which thus has to encode the patterns that allow the network to predict each class. We assume these patterns reflect the class prototypes.
This interpretation is trivially true courtesy of the image classification mechanism of the transformer architecture. The decision is a function of the dot product between the [CLS] token in the hidden state of the last layer and each row (“class prototype”) in the class embedding matrix. The probability of the image belonging to a category is proportional to this dot product. Our interpretation in terms of “class prototype” follows directly from the definition of dot product, since the latter will reflect how close the [CLS] token is to a class row.
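This dot-product mechanism can be sketched numerically (a minimal toy example; the dimensions and values below are illustrative, not those of an actual ViT):

```python
import numpy as np

rng = np.random.default_rng(0)
d, num_classes = 8, 5  # hypothetical hidden size and number of classes

# Class-embedding matrix: one row ("class prototype") per category.
E = rng.normal(size=(num_classes, d))

# [CLS] token from the hidden state of the last layer (illustrative values).
cls_token = rng.normal(size=d)

# The decision is a function of the dot product between the [CLS] token and
# each class row; the predicted category maximizes this alignment.
logits = E @ cls_token
predicted_class = int(np.argmax(logits))
print(logits.shape, predicted_class)
```

The same projection can be applied to any intermediate token to ask how closely it already aligns with each class prototype.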
Our interpretation is further supported empirically in our experiments given that the alignment of each token to the class-embedding projection is predictive of the relevance that this token has in the categorical decision, as demonstrated in our perturbation studies.
More broadly, it would be useful to better understand why the reviewer’s concerns remain unresolved. In responding to weaknesses 1 and 2, in our rebuttal and revised manuscript (1) we clarified the novelty of our work by pointing out two broad contributions that are absent in the literature, and (2) we explicated the rationale of our method, in particular, why our method is informative for our research goals and why linear probes are not appropriate for such purposes (please see rebuttal for details).
To improve the quality of our work, we would appreciate it if concerns with the statements made in our rebuttal of weaknesses 1 and 2 could be made concrete. | Summary: The authors propose to reverse-engineer pre-trained ViTs for the image classification task in order to investigate how the internal representations at different levels are projected onto the class embedding space and reveal how the models construct representations for predictions. It provides insights into the distinct contributions of self-attention and MLP layers in ViTs to the categorical composition. The proposed method can further identify important image regions for class detection as a valuable tool for achieving mechanistic interpretability and explainability in ViTs.
Strengths: 1. This paper presents a pioneering approach to reverse-engineering pre-trained ViTs for image classification tasks, offering new insights into how ViTs construct representations for their predictions. While the concept of reverse-engineering is inspired by NLP research, this is the first work to apply it specifically to ViTs in computer vision tasks.
2. The authors introduce a framework that enhances mechanistic interpretability and explainability in ViTs, enabling the identification of the most relevant image regions for detecting a specific class of interest.
3. The paper emphasizes the distinct roles of self-attention and MLP layers in this process, illustrating how they contribute differently to categorical updates by utilizing compositional key-value memory pair mechanisms.
4. To evaluate their findings, the authors employ several metrics including the class-value agreement score, key-value agreement rate, class similarity change rate, and the match between top-1 predictions.
5. The paper is highly accessible with clear logical flow. The tables and figures are presented in a manner that is easy to read and understand, contributing to the overall clarity of the research.
Weaknesses: 1. The findings might differ when using this method to reverse engineer transformers that are larger, have been trained with different datasets, or contain architectural modifications. For example, the paper focuses on vanilla ViTs trained on ImageNet, and it is unclear how well the proposed framework generalizes to other types of ViTs or other datasets.
2. The paper does not provide a comprehensive comparison with other methods for interpretability or explainability of ViTs, although it does mention some related work in this area.
3. Certain concepts require additional investigation. The reasons behind the distinct performance of block 11 compared to other blocks, as well as the rationale for summing the gradients across the blocks, remain unclear and warrant further study.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. Why are there performance differences between block 11 and other blocks?
2. Why sum the gradients over the blocks?
3. What is meant in lines 307-308?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the careful reading and useful feedback. We address the comments and questions below.
_Weakness 1_
Thank you for the suggestion. We have now demonstrated the generalizability of our method to all cases mentioned by the reviewer (see general response) and expanded our analyses to other variants of ViTs, including larger models, trained on different datasets, or containing architectural modifications (e.g. use of GAP). The new experiments demonstrate that we can apply our method to investigate a wide range of ViTs.
_Weakness 2_
Thank you for suggesting comparisons with other mechanistic interpretability and explainability methods.
Regarding mechanistic interpretability, we have added a comparison with linear probing approaches (see general response), which are widely used to study if categorical information is present in the hidden states of DNNs. Our new experiments show that the insights we can obtain from linear probes are very different from those of our method, and less informative for the purposes here.
Regarding explainability, our approach is complementary to existing methods and provides different kinds of insights. Specifically, SOTA methods that generate relevancy maps for a classification example compute the gradients of the class logits at the final layer with respect to the input and/or aggregate the information propagated across layers up to the class-embedding projection. Instead, our method is able to visualize the categorical information contained in the image tokens independently for each block. These block-specific visualizations allow us to (1) better understand how categorical information is hierarchically built, and (2) characterize the importance of each block in building the class representations. We clarify this conceptual difference between methods in the new version of our paper and provide some examples of layer-specific visualizations.
Regarding question 2, we originally suggested aggregating the gradients over blocks to show that our framework can also provide a global relevancy map that is semantically meaningful and can be used for traditional explainability visualizations. The sum procedure also allows us to corroborate that we obtain a fair portrayal of the individual contribution of each block to the final categorical representation: if the aggregated visualization provides a coherent and useful relevancy map (measured with perturbation studies), there is evidence that the interpretation of the individual blocks is meaningful. We have explained why we carried out this sum explicitly in the new version of our paper.
In addition, we compare the quality of our global relevancy map to that of an established explainability method (reference [3] in our paper) to further prove the accuracy and quality of our proposal. Using ViT-B/32, for both methods we (1) quantified the importance of each token, (2) gradually removed the tokens with the least to most importance scores (negative perturbation test), (3) gradually removed the tokens with the most to least importance (positive perturbation test), (4) measured the accuracy of the model with each removal, (5) computed the Area Under the Curve of the final accuracies. Our results show that our methods yield similar results to those of [3] (Table 2 in rebuttal PDF), highlighting the adequacy of our approach.
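The five-step perturbation protocol above can be sketched as follows (a toy stand-in: the importance scores and accuracy function below are hypothetical, purely to illustrate the AUC computation):

```python
import numpy as np

def perturbation_auc(importances, accuracy_fn, positive=True):
    """Remove tokens one by one, ordered by importance (most-to-least for
    the positive test, least-to-most for the negative one), measure the
    accuracy after each removal, and summarize the accuracy curve by its
    mean (a normalized area under the curve)."""
    order = np.argsort(importances)
    if positive:  # remove the most important tokens first
        order = order[::-1]
    removed, accuracies = [], []
    for tok in order:
        removed.append(int(tok))
        accuracies.append(accuracy_fn(removed))
    return float(np.mean(accuracies))

# Toy accuracy model: accuracy drops by the total importance removed.
imp = np.array([0.5, 0.1, 0.3, 0.1])
acc = lambda removed: max(0.0, 1.0 - imp[removed].sum())

auc_pos = perturbation_auc(imp, acc, positive=True)   # accuracy falls quickly
auc_neg = perturbation_auc(imp, acc, positive=False)  # accuracy falls slowly
print(auc_pos, auc_neg)
```

A faithful importance map yields a low positive-perturbation AUC and a high negative-perturbation AUC, which is the property the comparison above measures.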
_Q1_
We found that in ViT-B/32, MLP 11 promotes the strongest categorical updates via key-value memory pair mechanisms (Section 6.2). This could explain why high activation of the key-value memory pair system in MLP 11 (as measured by the key-value agreement rate) leads to better classification accuracy, as a result of a stronger categorical update.
Regarding why the strongest categorical updates are encoded in MLP 11, we hypothesize it is due to this being the last layer from which the [CLS] token can extract information (through the self-attention layer in block 12) to make a categorical decision. In turn, MLP layers promoting stronger categorical updates at the very final layers could be useful because the depth allows the model to develop more complex and semantically meaningful keys (as shown in NLP [1]).
Our new experiments provide additional evidence in favor of these hypotheses: 1) Models with deeper architectures present stronger categorical updates in the layers before the final one, not in layer 11 (Fig. 2 in reb. PDF); 2) When training explicitly aligns intermediate representations to the class embedding space in every layer, and thus the network is able to use the image token representations in MLP 12 to make categorical decisions, we found that the strength of the categorical updates in MLP 12 is even higher than that of MLP 11 (see Refinement ViT in Fig. 3 of the PDF).
In addition, we also note that in self-attention layer 12, the strength of the categorical updates and the activation of key-value memory pair systems are higher than in earlier blocks. This is further evidence that ViTs exploit key-value memory pair systems to promote categories at the very late stages of the network, so that the key representations can encode greater semantic complexity.
We have added this interpretation of the results to our paper.
_Q2_
See discussion of weakness 2.
_Q3_
If the question refers to our compositionality results, another way of stating these findings would be: in at least 80% of the cases, the final predictions of these layers do not exactly match the categories promoted by the most activated keys. Instead, results suggest that in the majority of cases, the final prediction is a composition of the categorical representations promoted by more than one key. The strength of this compositionality varied across blocks and layers.
We thank you for the suggestions again, which we consider to have been addressed successfully. If so, please consider raising the reviewer rating.
[1] Geva et al. "Transformer feed-forward layers are key-value memories." (2020).
---
Rebuttal Comment 1.1:
Title: Thank you for detailed clarifications.
Comment: Thank you for detailed clarifications. Some of my questions have been addressed. I am inclined to raise my score. | Summary: This work analyzes how Vision Transformers work by analyzing the representations of individual tokens (image patch representations) and how they evolve while passing through the layers of the network. The authors also show how to use their methods to develop an interpretability method.
Strengths: Originality: The work applies experiments originally proposed in the NLP space for analyzing Transformers to the Vision Transformer. I am not aware of anyone having done these types of experiments for ViT before, and the insights gained this way are interesting.
Quality: The experiments seem solid, and while I have doubts about a small part of them (see weaknesses) they are overall well done. They are well thought out and test simple hypotheses. The value of these experiments has already been validated in the NLP space.
Clarity: I found the experimental design was clear and the exposition easy to follow.
Significance: I think this work contributes to our understanding of how Vision Transformers work. In themselves they don't offer completely new insights, but confirm existing knowledge/intuition/theories and add empirical evidence that aids our understanding. I think the work is easily understandable and accessible, and gives insights that help understand the inner workings of one of the currently most-used models. I think the contribution is valuable.
Weaknesses: * The paper investigates the original ViT architecture as proposed in the 2021 ICLR paper. It completely ignores that we know now that ViT does not need a CLS token (I think this was first proposed in "Scaling Vision Transformers" by the original authors of ViT, shortly after their first paper). This paper independently confirms this finding, but simply citing that paper would have been easier. Also, analysing a ViT model that used a Global Average Pool (or MAP) to classify would remove one additional confounder. I'd appreciate if the authors could briefly mention/discuss this point in the next version of the paper (I feel like part of the reason people keep using CLS-token ViT is that every other paper does it, so I'd encourage the authors to point out that this isn't needed any more).
* A potentially relevant related work is "Understanding Robustness of Transformers for Image Classification", ICCV 2021, Bhojanapalli et al. The 2nd half of that paper also tries to understand how ViTs work by ablating parts of the model (e.g. by removing individual Self-Attn. and MLP layers), which feels similar to the perturbation studies presented in Table 1.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: No questions come to mind.
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 3 good
Limitations: The authors do not discuss limitations of their work, but given the nature of the paper (empirical exploration instead of proposing a new method) this does not apply as much.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their positive feedback and suggestions for improving the discussion of our findings. Below we address each of these in detail.
_Weakness 1_
Thank you for pointing out relevant previous work of [1]. We have added a reference in section 5.1 of our manuscript, stating in line 195 that “These results are aligned to those of [1], where it is shown that ViTs trained without the [CLS] token can achieve similar performance to models that include it”.
We also want to highlight that our experiments are not redundant but complementary to [1] in the following sense. Our results show that even in ViTs trained with the [CLS] token, the image tokens can achieve class decodable information without extracting information from the [CLS], which sheds light on the mechanisms behind the building of categorical representations.
Moreover, as described in the general comment, we carried out additional experiments and used our method to probe ViTs trained with GAP. We found that the training with GAP changes the mechanisms by which the network creates categorical representations. Concretely, while introducing GAP does not decrease the class identifiability scores of image tokens across layers (see Fig. 1 of the rebuttal PDF), it decreases the class identifiability rate of the tokens in the last layer (i.e. the percentage of tokens that contain a class identifiability score of 1). In addition, as Figure 3 of the rebuttal PDF shows, GAP training decreases the reliance on key-value memory pair mechanisms for building categorical representations. Taken together, these findings corroborate the idea that the categorical representations on GAP-based ViTs emerge in a more distributed fashion than in [CLS]-based ViTs. We have added these findings and their discussion to the new version of our manuscript.
_Weakness 2_
Thank you for highlighting the relevant work by Bhojanapalli et al. (now cited).
Our analyses are complementary to those of Bhojanapalli et al. who investigate the effects of removing self-attention and MLP layers in the performance of ViTs. In contrast, we analyze in detail how the categorical representations of ViTs are modified and built by these sub-modules, regardless of performance. We concretely investigate the use of key-value memory pair systems in these processes.
In addition, we think our perturbation studies complement the findings on the robustness of ViTs in Bhojanapalli et al. We show how context tokens are necessary for building a categorical representation in the class-labeled image tokens, which sheds light on how much class-unrelated information the model includes in the category representations of the embedding matrix.
We have added a discussion of the work of Bhojanapalli et al. and the mentioned differences with our study to the “Related Work” section.
Thanks again for the feedback. We believe we have addressed all your concerns. If so, please consider raising the reviewer rating.
[1] Zhai et al. "Scaling vision transformers." (2022).
---
Rebuttal Comment 1.1:
Comment: Thank you for the clarifications and the additional experiments. I stand by my original review that this work should be accepted to the conference. | Summary: Inspired by recent advancements in NLP, this paper introduces a novel framework designed to reverse engineer vision transformers for the purpose of image classification tasks. The framework focuses on analyzing the internal dynamics of Vision Transformers (ViTs) within the class-embedding space, revealing the intricate process by which ViTs construct categorical representations using self-attention and MLP layers. The empirical findings gleaned from this investigation offer valuable insights into the inner workings of ViTs. Furthermore, the proposed framework can be utilized to identify the crucial components within an image that play a significant role in class detection.
Strengths: 1. The research presented in this paper enhances the current understanding of interpretability and explainability in ViTs. While previous studies have primarily examined the preservation of spatial representations in ViTs throughout the hierarchy, this work specifically investigates the construction of categorical information for the final prediction.
2. This research demonstrates the intriguing phenomenon of internal disruption in categorical representations caused by context and attention perturbations.
3. The Experiment Design in this study encompasses a comprehensive range.
Weaknesses: 1. It would be beneficial to include a thorough discussion in Section 3 regarding the distinctions between the proposed framework and similar work in NLP, which also involves projecting the internal representations of these models onto the output space. Including a comparison of the technical details between the two approaches would enhance the clarity of the paper.
2. The empirical results provided in the study primarily rely on the analysis of ViTs pre-trained on ImageNet. To strengthen the findings, it would be advantageous to incorporate results obtained from ViTs pre-trained on larger datasets. This expansion would offer a broader perspective and further validate the conclusions drawn in the research.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: See the weakness part.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: In Section 8, the authors discuss the limitations of the adopted model's diversity and the underexplored application for performance improvement.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the positive remarks and suggestions for improving the clarity and generalizability of our work. Below we address these comments in detail.
_Weakness 1_
Thank you for the concrete suggestion on how to improve the description of our method. In our revised version of the paper, we have added a subsection thoroughly comparing our approach with that of NLP.
Due to spatial constraints, we cannot reproduce the entire subsection here. As a summary of the added information in this section:
- We describe how the input- and output-embedding matrices of both types of networks differ. Pre-trained LLMs include a vocabulary-projection matrix that serves both as the input and output embeddings of the model. In contrast, the nature of the input and output embedding matrices differ in ViTs: the input embedding matrix is a linear projection of the image patches, while the output matrix performs a categorical projection.
- We note that the semantic task learned by these networks is different: Pre-trained LLMs are trained to predict the next word of a sentence (only auto-regressor LLMs were investigated in similar previous NLP studies), while ViTs for image classifications are trained to predict the semantic label of an image.
- We discuss the implications of the above differences and how they may lead to differences in the information that is encoded in key-value memory pair systems. Particularly, we give a more thorough explanation as to why interpreting the keys of ViTs is not as straightforward as in the LLM case: while the input space of LLMs continues to be relevant throughout the network’s hierarchy because it is also the space that is projected to in the output prediction (see [1]), in ViTs the input-space is no longer relevant for later projections. That is why our current work focuses on analyzing the semantics of this system’s value vectors. We however note that future work can be dedicated to finding alternative ways of investigating what is encoded in the keys of these systems.
_Weakness 2_
We agree on the value of investigating the generalizability of our framework and findings by probing ViTs trained on other datasets. We have now done so more thoroughly. As described in detail in the general response, in the new version of our paper we have expanded our analyses to a ViT-B/16 fine-tuned on CIFAR100, and a ViT-B/16 trained on a higher-quality and a multi-labeled version of ImageNet-21k (MIIL) [2].
We found that our framework can be used to analyze both variants of ViT, by successfully translating their [CLS] and image tokens across layers into the class-embedding space to get meaningful insights into how they build categorical representations and the effect of the training dataset in these processes.
Concretely, we found that fine-tuning on CIFAR100 increases the identifiability score of tokens from the very first layers of the model (see Fig. 1 in rebuttal PDF). This result suggests that fine-tuning ViTs on small datasets creates a class-embedding representation that is potentially overfitted to detect simple patterns in the image. Training on MIIL improves the identifiability of tokens (especially the [CLS]) without showing overfitting patterns, compared with a vanilla ViT-B/16 (see Fig. 1 in rebuttal PDF).
Fig. 2 in the rebuttal PDF also shows how both variants rely on key-value memory pair mechanisms, with MIIL showing the strongest key-value agreement rates of all networks, meaning that more tokens promote categories using these mechanisms than other ViT variants.
We have included these experiments and interesting findings in the new version of our manuscript.
We thank you again for the concrete suggestions that we believe to have sufficiently addressed. If so, please consider raising the reviewer rating.
[1] Dar et al. “Analyzing Transformers in Embedding Space.” (2023).
[2] Ridnik et al. "Imagenet-21k pretraining for the masses." (2021).
---
Rebuttal Comment 1.1:
Title: Post rebuttal
Comment: Thank you for spending the time addressing my initial queries with the paper. Your inclusion of the discussions on the differences from similar work in NLP and an investigation on the generalizability of the proposed framework is appreciated. Kindly ensure that these discussions are seamlessly integrated into the revision.
As of now, I would like to maintain my score. Looking forward to the opinions of the other reviewers. | Rebuttal 1:
Rebuttal: We thank the reviewers for feedback that helped improve our manuscript. In the general response, we address two common concerns about our work: we (1) report new experiments that provide evidence of the generalizability of our method, and (2) conceptually discuss and empirically demonstrate the advantages of our framework over linear probes. In all cases, we obtain results that are informative and favor our proposed method.
1) Generalizability
Some reviewers raised the possibility that our framework might not generalize outside vanilla ViTs. We agree that more experiments are needed to demonstrate generalizability. We have thus tested 5 additional variants of ViTs and obtained favorable results in all cases.
Specifically, we separately probed ViTs that are 1) larger (ViT-L/16); 2) fine-tuned for other datasets (CIFAR100); 3) trained on higher-quality datasets (MIIL) [1]; 4) trained with a refinement module that aligns all tokens’ intermediate representations to class space [2]; and 5) trained with Global Average Pooling (GAP) instead of the [CLS] token.
Our findings demonstrate that we can successfully translate the [CLS] and image tokens of all these variants into the class-embedding space across layers. As depicted in Table 1 (see rebuttal PDF), the identifiability rate of the image tokens in the last layer was above chance in all models. In addition, Fig. 1 shows class identifiability increases over blocks for all variants. We also found that in all networks, class-labeled tokens have higher identifiability scores than context-labeled ones, which illustrates that our method can be used to characterize the emergence of categorical representations in image tokens.
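As an illustration of the kind of score involved (the paper's exact definition is not reproduced here; this normalized-rank version is a hypothetical stand-in), projecting a token into class space and ranking the true class could look like:

```python
import numpy as np

def class_identifiability(token, class_matrix, true_class):
    """Toy stand-in for a class-identifiability score: project a hidden
    token onto the class-embedding matrix and return the normalized rank
    of the true class (1.0 = top logit, 0.0 = bottom logit)."""
    logits = class_matrix @ token
    ranks = np.argsort(np.argsort(logits))  # rank 0 = lowest logit
    return ranks[true_class] / (len(logits) - 1)

# A token perfectly aligned with class prototype 2 (identity class matrix).
E = np.eye(4)
aligned = np.array([0.0, 0.0, 1.0, 0.0])
print(class_identifiability(aligned, E, true_class=2))  # 1.0
print(class_identifiability(aligned, E, true_class=0))  # low score
```

Under such a score, class-labeled tokens scoring higher than context-labeled ones is what would indicate emerging categorical representations.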
Our results also prove that we can use our framework to inspect the role of attention and MLP layers across ViT variants, and their use of key-value memory pair systems (Fig. 2). Although we observe similar patterns across networks, our method also reveals quantifiable differences indicating how modifications of vanilla ViTs affect the building of categorical representations. We discuss these differences in the responses to each reviewer comment.
We have added the above findings to our paper and: (1) swapped Fig. 2a and 4 (main paper) with Fig. 3 and 4 (PDF), (2) discussed the similarities and differences across ViT variants, and (3) included figures for each network in the supp. material.
2) Comparison to linear probing
Some reviewers point out that we do not discuss the advantages of our method over the commonly used linear probing approach. In the revised manuscript we perform experiments to demonstrate that linear probing is less informative for our research question.
We first note that these two methods differ in the type of insights they can provide. Although both aim to characterize the information encoded in the hidden representations, only our framework can quantify how these intermediate representations increasingly align with the class prototype learned by the model (encoded by the weights of the class-embedding matrix). Furthermore, our method additionally characterizes the inner mechanisms of this alignment (e.g. by investigating key-value memory pair systems via the projection of ViT’s parameter matrices).
In contrast, linear probes measure the linear separability in the hidden representations of samples of different classes [3] but do not reveal whether the learned separability is exploited by the model in categorical decisions. Thus, linear probing does not necessarily uncover the relevant factors behind category-building processes in ViTs. In other words, the successful class probing of tokens in a given layer can rather reflect the separability of confounders whose feature representations are ignored by the model in the class-embedding space. For example, for a particular token and layer, a linear probe may use internal representations of background information that is highly correlated with the presence of a class in a specific dataset, to distinguish between categories. We show this is likely through the experiments described next.
Following [12], in the revised paper we trained separate 10-shot linear classifiers on ImageNet for each token position and layer of a ViT-B/32. To test if the information learned by these probes shed light on the categorical decisions taken by the network, we conducted negative and positive perturbation tests. Concretely, we quantified the class identifiability scores obtained from the linear probes for each token, gradually removed the tokens with the least to most identifiable scores (for negative perturbation; vice-versa for positive perturbation), and measured the accuracy of the model with each removal.
Our results demonstrate that even if linear probes can generally decode the classes of the image tokens with better top-1 accuracy, this is not informative of the relevance of each token for the categorical decision of the network (Fig. 3). Moreover, we found that our experiments achieve higher accuracy than those reported in [12], which evidences the lack of generalizability of linear probes across training conditions (e.g. different datasets).
Finally, note that even if linear probes and our method were to provide similar insights (which we demonstrated is not the case), our framework is more time-efficient: it comprises a single forward pass over the validation images, and no passes when projecting the parameter matrices. In contrast, linear probes additionally involve 1) a forward pass over the training images, and 2) the fitting of a linear classifier for every token position and layer.
We add these linear probe results to a new subsection in the main manuscript, and Fig. 3 (PDF) to the supp. material.
[1] Ridnik et al. "Imagenet-21k pretraining for the masses" (2021);
[2] Naseer et al. “On Improving Adversarial Transferability of Vision Transformers” (2021);
[3] Alain & Bengio. "Understanding intermediate layers using linear classifier probes" (2016);
Pdf: /pdf/97de559bc0464f0e542f1dde96070928855706d8.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: This paper utilizes a pre-trained embedding matrix to elucidate the mechanism of Vision Transformers. By using the embedding matrix, inner representations at any layer can be investigated in class spaces. Specifically, it offers a visualization of how self-attention and MLP process class categorical information. This method can also be employed to visualize the saliency map.
Strengths: 1. Applying the embedding matrix is straightforward. This method does not necessitate additional training and is easy to implement.
2. Overall, the paper is well-written and well-organized
Weaknesses: 1. It appears that the main point of this paper is introducing the embedding matrix for empirical analysis in vision tasks and demonstrating the usefulness, since the embedding matrix has been previously explored in NLP tasks. However, the novelty and significance of the insights obtained by using the method are somewhat limited.
- Non-zero linear probing accuracy can be achieved even at the early layers [a].
- Self-attentions significantly change the representations, compared to MLPs [a].
- Self-attentions and MLPs perform complementary roles. For example, self-attentions aggregate information whereas MLPs diversify it [b].
2. Analyses are provided only for vanilla ViT. Since modern ViTs such as the Swin Transformer utilize global average pooling instead of a CLS token, the influence of these findings might be limited.
[a] Raghu, Maithra, et al. "Do vision transformers see like convolutional neural networks?." *NeurIPS* (2021).
[b] Park, Namuk, and Songkuk Kim. "How do vision transformers work?." ICLR (2022).
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: A straightforward, and potentially more accurate, method to generate a map from token space to class space involves conducting layer-wise linear probing experiments (see, e.g., Figure 13 of [a]). When compared to linear probing, does the pre-trained embedding matrix offer any advantages? One potential advantage I can foresee is that no additional learning is required. Incorporating a comparison between linear probing and the embedding matrix might improve the manuscript.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: See the Weaknesses section for technical limitations. I cannot find any ethical issue.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their valuable feedback, which has helped clarify and better demonstrate the novelty, generalizability, and usefulness of our framework and findings.
_Weakness 1_:
Thank you for sharing relevant previous work which we now cite and discuss in the revised manuscript. We note that our work provides additional and complementary insights to those mentioned in the reviewer's comment. Specifically:
- [a] shows that non-zero linear probing can be achieved in the [CLS] token, but as can be seen in Fig. 11 of their work, their linear probes did not achieve good accuracy when investigating the image tokens. In contrast, our method can successfully extract categorical information from image tokens and characterizes the factors enabling the emergence of class representations (e.g. the categorical construction in class-labeled vs context-labeled tokens). Moreover, as our general rebuttal and the “linear probing” section in this response elaborate in detail, linear probing experiments do not enable the same inferences as our method does.
- Using a Ratio of Norms analysis, [a] shows that the self-attention layers impact the residual stream more strongly than MLP layers in the [CLS] token at early layers, and the reverse is true for image tokens at later layers (MLP more than self-attention layers). Importantly, this Ratio of Norms analysis only provides an estimate of the proportion of information added to the residual stream by these layers, in comparison with that added by the skip connections. It does not characterize the amount and the mechanisms by which the information added by each layer and block updates the categorical representations of the residual, as we did in our work. The mechanistic insights that we have gained with our approach can potentially be used for model editing or performance improvements in future work.
- The results of [b] are indeed relevant to our work and are now cited in the revised manuscript. However, our findings are complementary to those reported in [b]. Similarly to [a], [b] analyzes statistical properties of the information added by self-attention and MLP layers of ViTs. In contrast, our work aims to provide insights into the semantic characteristics of the information added by these layers. Concretely, we investigate how categorical representations are built by exploiting mechanisms like key-value memory pairs that encode semantic information, which have been reported in NLP models.
More broadly, the greatest difference between our work and those mentioned in the reviewer's comment is that we introduce a general mechanistic interpretability method that can be efficiently applied to probe the categorical representations of different and future types of ViTs, as long as a class-embedding matrix has been learned during training.
In the new version of our manuscript, we explicitly clarify the novelty and significance in the “Related Work” section; specifically, how our findings provide additional insights to those mentioned in the review.
_Weakness 2_:
Thank you for pointing out this limitation, which we now overcome with additional experiments yielding favorable results. As described in detail in the general comment, we have expanded our analyses to other variants of ViTs, including those using GAP instead of the [CLS] token. This new set of experiments demonstrates that we can successfully apply our framework to other types of ViTs to gain insights into how different training variables (e.g. dataset, architecture, pooling, learning objective), affects category building mechanisms.
For example, we found that introducing GAP does not decrease the class identifiability scores of image tokens across layers (see Fig. 1 of rebuttal PDF), but it does decrease the class identifiability rate of the tokens in the last layer (i.e. the percentage of tokens that contain a class identifiability score of 1). In addition, as Fig. 2 and 3 of the PDF show, GAP training decreases the reliance on key-value memory pair mechanisms for building categorical representations. Taken together, these findings corroborate the idea that the categorical representations on GAP-based ViTs emerge in a more distributed fashion than in [CLS]-based ViTs. However, we also found that class and context tokens have significant differences in the evolution of their identifiability scores, which allows us to conclude that even in these networks there is a meaningful pattern of how identifiability emerges from the image tokens.
In summary, these new experiments show that our framework can be used to investigate modern GAP-based ViTs, which strengthens the generalizability of our work.
_Question_:
Thank you for raising this point and for suggesting linear probing experiments. We have run additional experiments accordingly, with informative and favorable outcomes. Please see the general response where we describe how we empirically demonstrate the advantages of our method over linear probing.
Here, we also want to add that the linear probe experiment reported in Figure 13 of [a] aggregates the representations of the tokens. Thus, this approach does not take into account the potential differences in class identifiability between image and [CLS] tokens, or between different kinds of image tokens (e.g. class-labeled vs context-labeled image tokens, as explored in our study). For that reason, we instead replicated the experiments of Figure 9 of [a]. Moreover, as described in the general comment, we obtained different results than those reported in Figure 9 of [a], which may be taken as evidence of a lack of generalizability across training conditions in linear probing experiments.
We benefited from the comments and suggestions, and believe we have addressed them successfully. If so, please consider raising the reviewer rating.
---
Rebuttal Comment 1.1:
Title: RE: Rebuttal by Authors
Comment: I appreciate the author's effort to further generalize the discussion and clarify its novelty. However, I still believe that the novelty is limited, and I am not fully convinced by the discussion on linear probing based on ad hoc analysis. Moreover, as reviewer vRFW pointed out, it might not be clear whether the representations are aligned. Nevertheless, as the paper has improved during this rebuttal process, I am inclined to change my recommendation to borderline accept.
---
Reply to Comment 1.1.1:
Comment: Thank you for your comments and for raising the rating. We respond to each in turn.
> However, I still believe that the novelty is limited,
To improve the quality of our work, we would appreciate if you could further clarify why the broad and novel contributions of our work (outlined in the rebuttal) over the studies mentioned in the review are insufficient.
> and I am not fully convinced by the discussion on linear probing based on ad hoc analysis.
Similarly, we would appreciate it if you could clarify the meaning of “based on ad hoc analysis”.
In response to the original reviewers’ comments on linear probing, we have replicated the suggested linear probe analysis of [a] and compared it with our method, reporting favorable results. Specifically, we believe our results give strong evidence that linear probing does not necessarily uncover the information driving the alignment to the class-embedding representation used in the categorical prediction, as our method does (please see rebuttal for details). Moreover, in contrast to our method, it is not possible to use linear probes to investigate the mechanisms by which categorical alignment takes place.
We would appreciate concrete, actionable feedback on how to strengthen our conclusions on this point.
> Moreover, as reviewer vRFW pointed out, it might not be clear whether the representations are aligned.
As discussed in the response to reviewer vRFW, whether representations are aligned in absolute terms or not is orthogonal to our stated research goals. We do not claim that early layers lack information because they are not aligned to class-embedding representations.
Instead, our approach aims to characterize how the alignment to the class-embedding space unfolds throughout the model’s hierarchy and the mechanisms that enable it. Thus, our approach characterizes relative alignment change by means of an identifiability measure that is continuous (as opposed to discrete measures of previous work).
Moreover, we show that the alignment process takes place from very early layers (as evidenced by the significantly increased alignment of inner representations to the class embedding as compared to a random model), so we do not disregard the importance of these early (and less aligned in absolute terms) representations with our method. | null | null | null | null | null | null |
NPCL: Neural Processes for Uncertainty-Aware Continual Learning | Accept (poster) | Summary: This paper proposes a new method for Continual Learning (CL) through the Neural Processes (NPs) framework. Specifically, the authors are inspired by MTP [1], which utilizes a global latent and task-specific latents for inter-task and intra-task knowledge representation. In this paper, the authors additionally regularize the parameter updates of MTP so that they do not drift too much, and perform task-wise inference through an uncertainty quantification metric. Through this, they achieve performance comparable to state-of-the-art prior works while maintaining a smaller memory.
[1] Buzzega, Pietro, et al. "Dark experience for general continual learning: a strong, simple baseline." Advances in neural information processing systems 33 (2020): 15920-15930.
Strengths: - This paper proposes new regularization strategies to utilize NPs for CL.
- The prior works are well summarized and they discussed their model adequately.
- They evaluated it on multiple benchmarks, including class- and domain-incremental learning benchmarks, against diverse prior works including NPs [1] and ANPs [2], and showed that their model, NPCL, achieves performance comparable to state-of-the-art methods such as DER [3].
- They studied diverse ablations such as their regularization losses, latent types (e.g., when only the task-wise latent is learned), task-wise inference, and context sizes. In this study, they showed that their task-wise inference is important for the model's performance and that the uncertainty on the logits of the task heads is a good measure for task label estimation. Additionally, they compared their model with DER in terms of storage overhead; in this respect, NPCL shows more efficient storage utilization than DER.
- They also showed that the uncertainty on the task heads can be utilized to identify whether novel data is out-of-distribution, and that uncertainty-based task-wise inference shows higher confidence for correct task labels.
- Lastly, they clearly discussed the limitations of their model such as more computational overhead and inference time complexity caused by multi-head attention.
[1] Garnelo, M., Schwarz, J., Rosenbaum, D., Viola, F., Rezende, D. J., Eslami, S. M., & Teh, Y. W. (2018). Neural processes. arXiv preprint arXiv:1807.01622.
[2] Kim, H., Mnih, A., Schwarz, J., Garnelo, M., Eslami, A., Rosenbaum, D., ... & Teh, Y. W. (2019). Attentive neural processes. arXiv preprint arXiv:1901.05761.
[3] Buzzega, P., Boschini, M., Porrello, A., Abati, D., & Calderara, S. (2020). Dark experience for general continual learning: a strong, simple baseline. Advances in neural information processing systems, 33, 15920-15930.
Weaknesses: - The motivation is not strong enough. Their motivation to utilize NPs for CL is to measure predictive uncertainties. However, it is not clearly discussed why it is important for CL.
- Figures (Figures 1, 2, 4, and 6) are not recognizable. Especially their fonts are too small. For Figure 6, they missed labeling the axes.
- In lines 38-39, they mention that one of their motivations is that NPs can meta-learn input correlations across correlated tasks. However, in the limitation section, they attribute a weakness to the incompetence of dot-product attention (lines 305-306). This analysis runs counter to their stated motivation.
- Equation (7) and the relevant explanations are confusing. In the previous section for GR, the current task is denoted $t$ and dominated by the $t$-th task's samples, while it is referred to as $j$ in this section. As I understood, they tried to regularize task $t$'s latent towards the one learned at the step when task $t$ first arrived, but this is not easy to follow from their explanation.
- Equation (8) is hard to understand. According to their explanation of the model's memory management, the sample $x$ is not stored, yet it is required in this equation (without $x$, $L_{CE}$ cannot be calculated).
- The comparison in Table 1 can cause confusion. At first glance, we can expect NPCL and DER to show comparable performance in the same memory usage, but their storage usages are quite different as shown in Appendix E.3. I can recommend showing as a plot in which the x-axis is real storage usage and the y-axis is accuracy.
- (minor) typos
- In equation (8), the bracket for $L_{GR}$ and $L_{TR}$ is missing.
- In table 2, baseline is not w/o GR *or* TR, but GR *and* TR.
- In line 208, with. -> with
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: - Can you analyze why small $N$ does not work well, supported by additional experimental results for diverse $N$ values?
- In your design, the sum pooling is used to get the global and task-specific latent. Could you replace it with Multi-head Attention Pooling (MAP) [1] (e.g., using the learnable additional token for the latent) and test it for a few tasks?
[1] Lee, J., Lee, Y., Kim, J., Kosiorek, A., Choi, S., & Teh, Y. W. (2019, May). Set transformer: A framework for attention-based permutation-invariant neural networks. In International conference on machine learning (pp. 3744-3753). PMLR.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 3 good
Contribution: 2 fair
Limitations: The authors adequately discussed the limitations of their work. Thanks for their detailed analysis of their limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their careful reading and remarks on our work. In regard to the questions raised, we provide our explanations below:
- **Motivation**: Our motivation to utilize NPs for CL is two-fold. First, based on lines 20-26, we aim to design parameter isolation methods for CL that can better: (a) exploit task-specific and inter-task correlations, and (b) infer which task-specific component to utilize for test samples in the lack of a given task id. Second, we are interested in CL models that are reliable in their predictive confidence. This latter goal is particularly useful for the real-world deployment of models where changing data distributions can interfere with a model's interpretability of its predictions. We thus wish to know when to rely on a model’s predictions and when to call for human/expert interventions. We will elaborate further on our motivation in the next version of our paper.
- **Lines 38-39**: Yes, it is true that NPs can meta-learn input correlations across correlated tasks. However, the degree of the learned correlations can vary with the type of inductive bias used by the model [1]. For instance, the attentive NPs using dot-product attention capture better correlations than the vanilla NPs. Nevertheless, dot-product attention, like other optimization algorithms, suffers from its own limitations [2]. Rather than an opposite analysis of our motivation, we thus suggest that lines 305-306 be read as an implication of the no-free-lunch theorem for supervised learning [3].
- **On Eq. (7)**: We apologize for the confusion and provide further explanation on the notation here. In line 165, the task $t$ indeed refers to the current training task whose samples dominate the training set (given that the replay memory $\mathcal{M}$ contains a much smaller number of samples per past task). On line 172, by task $t$, we actually refer to the past tasks for which we are regularizing the learned distribution (learned from $\mathcal{M}$) towards the distribution that we had learned when the past task was first seen, *i.e.,* at a step $j$ with $0 \leq j < t$. To retrieve the latter past task distribution, we rely on the distribution memory $\mathcal{M}_\mathcal{N}$ (line 173).
- **On Eq. (8)**: We believe there has been a misunderstanding regarding how the replay memory $\mathcal{M}$ works. $\mathcal{M}$ does store the sample $x$. We state this very clearly in lines 107-109, and also in appendix E.3 (lines 624-625). It is in addition to this memory $\mathcal{M}$ that NPCL stores the *separate* distribution memory (line 173). Given that $x$ is available from $\mathcal{M}$, $\mathcal{L}_{CE}$ can thus be calculated.
- **Table 1**: Following the previous point, we reiterate the fact that both NPCL and DER store the original sample $x$ in the replay memory $\mathcal{M}$ (lines 107-109; Appendix E.3 lines 624-625). The difference in memory usage is that while DER relies on storing the logits for the individual samples $x$, NPCL instead stores the parameters of the global and task-specific distributions as well as the scalar task id of $x$. Given that both NPCL and DER store sample $x$, we do not take these into account for our comparison of storage gain (lines 624-625). We nevertheless agree with the idea of a plot depicting the real storage usage and plan to incorporate this.
- **Analysis of $N$ values**: Please refer to the global response and Fig. 3(a) in the attached PDF for the analysis. Intuitively, the hierarchy in NPCL implies that the $M$ Monte Carlo (MC) samples of different task-specific distributions are conditioned on the same set of $N$ MC samples of the global distribution (see Fig. 3(b) for a rough sketch of this). With smaller $N$, we restrict the diversity of the samples $z^G$ of the global distribution that the task-specific encoders should look at. This is equivalent to limiting the support of the global distribution. We know that different regions of the global distribution could capture different CL tasks. Smaller $N$ values are thus likely to condition a task $t$'s encoder on a global sample $z^G$ that represents a region corresponding to another task that is less correlated to $t$. Similarly, with smaller $N$, we are also more likely to miss out on conditioning the $t$-th encoder on samples from regions of the global distribution that capture tasks having a high correlation with $t$.
- **Multi-head Attention Pooling (MAP)**: Per your advice, we experiment with using Multi-Head Attention Pooling (MAP) [4] as a possible alternative to the global average pooling. The results of NPCL with MAP on S-CIFAR-10 with a memory size of 200 are shown below. In particular, we observe that while MAP does lead to a slight improvement of accuracy on the first task, the additional number of learnable parameters inadvertently causes higher forgetting on subsequent incremental tasks. We thus leave the exploration of MAPs for NPs in CL as a potential direction for future work.
| Method | Task 1 | Task 2 | Task 3 | Task 4 | Task 5 |
|---------------|--------|--------|--------|--------|--------|
| NPCL | 98.3 | 86.12 | 74.85 | 69.48 | 63.78 |
| NPCL w/ MAP | 98.46 | 80.43 | 68.2 | 59.74 | 51.77 |
**References**:
[1] Jha, S., Gong, D., Wang, X., Turner, R.E., & Yao, L. (2022). The Neural Process Family: Survey, Applications and Perspectives. *ArXiv, abs/2209.00517*.
[2] Kim, M., Go, K., & Yun, S. (2022). Neural Processes with Stochastic Attention: Paying more attention to the context dataset. *ICLR*.
[3] Goldblum, M., Finzi, M., Rowan, K., & Wilson, A.G. (2023). The No Free Lunch Theorem, Kolmogorov Complexity, and the Role of Inductive Biases in Machine Learning. *ArXiv, abs/2304.05366*.
[4] Lee, J., Lee, Y., Kim, J., Kosiorek, A., Choi, S., & Teh, Y. W. (2019, May). Set transformer: A framework for attention-based permutation-invariant neural networks. In International conference on machine learning (pp. 3744-3753). PMLR.
---
Rebuttal Comment 1.1:
Title: Reply to the response of the author
Comment: Thank you for your response to my concerns. I think the authors have properly addressed them.
Additionally, can you update the figures? They are hard to understand due to very small fonts and missing x- and y-axis labels.
---
Reply to Comment 1.1.1:
Comment: Dear Reviewer 1ufQ,
Thank you for taking the time to review our rebuttal. We have considered your feedback regarding the figures and plan to update our manuscript at the earliest allowed revision.
Let us know if you have any other questions or concerns. Also, if you are satisfied with our answers, please consider revising your score.
With best regards | Summary: This paper suggests tackling continual learning (CL) with a neural process model (NP). The authors describe a hierarchical latent variable model, with one global latent, and t per-task latents. As is common in NPs, the latent posteriors are a function of a context of datapoints with their corresponding labels, and then a decoder maps the latents to an output prediction.
During training time, the model is trained with access to task labels, and uses data from different tasks to train the corresponding task-specific latent posteriors. A replay buffer stores and reuses data from old tasks during training, and additional regularization terms are introduced, in order to minimize effects of drifting away from previous tasks ('forgetting').
At test time, there is no access to task labels, and therefore the model is run for all tasks, and the output with the least entropy is used.
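The minimum-entropy selection described above can be illustrated with a small sketch (illustrative only; function names are not from the paper):

```python
import numpy as np

def predictive_entropy(p, eps=1e-12):
    """Shannon entropy of a softmax output; lower means more confident."""
    return float(-np.sum(p * np.log(p + eps)))

def select_task(per_task_probs):
    """Run all task heads and keep the prediction with the lowest entropy."""
    entropies = [predictive_entropy(p) for p in per_task_probs]
    t = int(np.argmin(entropies))
    return t, per_task_probs[t]
```

For instance, a head outputting [0.9, 0.1] has lower entropy than one outputting [0.5, 0.5], so its task id and prediction would be selected.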
The model is compared to various CL approaches and is shown to achieve and sometimes surpass the state of the art. The paper describes more ablations and analysis of the results.
Strengths: The main strengths of this paper are:
1. An interesting model with non-trivial components put together:
- the hierarchical structure of the latents
- the difference treatment of training time and test time with respect to access to context latent
2. The result seems to surpass previous approaches
3. Extensive analysis of the results
Weaknesses: The main weakness of this paper is that modeling continual learning as meta learning via neural processes is not exactly valid.
Specifically, NPs assume that the dataset contains many different functions or tasks, and one of the main challenges is that the task id is not accessible and has to be inferred. This is the reason a "context" of labeled points is used - to infer the task at hand.
In the setup described in the paper, it is assumed that the task id is accessible at training time, which allows designating different parts of the model to different tasks.
At test time, the model uses a context coming from the training data with little overlap to the task of the test data. This is a major difference from NPs.
I think that the presented model is interesting and can have a significant contribution to the community but that the good performance exhibited is not connected to NPs as stated, but rather merely a result of the higher capacity of hierarchical latent variable models.
Since the model is conditioned on training points during evaluation, perhaps a better way to present it is as a non-parametric method (e.g.[1]) rather than a neural process.
[1] Self-Attention Between Datapoints: Going Beyond Individual Input-Output Pairs in Deep Learning. Jannik Kossen, Neil Band, Clare Lyle, Aidan N. Gomez, Tom Rainforth, Yarin Gal
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: 1. The paragraph in line 110 is not clear. What does it mean to optimize parameters of $\mathcal{D}^t$?
2. In equation 3, does the context define the task (as modeled in standard NPs)? If so why are there superscripts of all the tasks 0:t?
3. In equation 4 why are $z^t$ conditioned on the contexts $\mathcal{C}^t$? If the task id is given then the context doesn’t have any information.
4. The number of samples described in line 151 is not clear. If every point is mapped to one task then only the corresponding latent needs to be sampled, making the number of samples M. Is this wrong?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 3 good
Limitations: I did not find any unaddressed limitations or potential negative impact in this work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their careful reading and remarks on our work. In regard to the questions raised, we provide our explanations below:
- **Validity of NPs for CL**: We believe that the remark against the validity of our setting stems from a misunderstanding between the standard meta-learning (ML) settings (that NPs were initially designed for) and the continual learning (CL) setting (ours). Namely, in ML testing, we are interested in how the trained model performs on new tasks, each of which is associated with a dataset $\mathcal{D}$ specified for one task at a time. $\mathcal{D}$ thus has a support/context set and a query set. As also mentioned in the review, the applications of NPs to ML keep the task id inaccessible such that the model learns to infer it from the context. This is because what defines a new task in ML is just the context points.
However, in CL with a replay memory, our training and testing set could both comprise samples from all the tasks. That is to say, the testing phase of CL does not have any new tasks, which is the fundamental difference from the ML problem setting. Therefore, we believe that it is valid that our training and test data use the same set of context points. Consequently, it is also valid for us to access the task IDs of the context set points to derive more informative task-specific priors.
In practice, going beyond the ML setting, there have been several methods in the NP literature that use task id for *designating different parts of an NP model to different tasks* for multi-task learning [1,2] and *using the training samples as context at test time* for large-scale classification tasks [3,4].
**Therefore, we believe that our use of NP for CL is valid and sincerely hope the reviewer reconsiders their rating.**
- **Effect of context on the performance of NPCL**: To verify that the access to task ids makes the task-specific context $\mathcal{C}^t$ more informative, we design a simple experiment. While training on an incremental task $t > 0$ onwards, after having derived the global distribution samples $z^G$, we tinker with the flow of the task-specific context points $\mathcal{C}^t$ to the different task-specific components of the NPCL. Namely, instead of directing $\mathcal{C}^t$ to its *correct* $t$-th task encoder, we redirect it to the $j$-th task encoder, where $j \neq t$ and $j$ is chosen at random from the pool of all seen task ids. Note that the presence of such randomly allocated context points during training implies that we now have a noisy task-specific prior, *i.e.,* $q_\phi(z^t | z^G, \mathcal{C}^j)$, to match in the second term of the right-hand side of our ELBO, *i.e.,* eq. 5 in the main paper. We keep all our other training settings (including the loss coefficient values) unchanged.
Figure 4 in the global response PDF compares the performance of NPCL with the noisy task-specific priors on the two different memory sizes of S-CIFAR-10. While the performance gain of the NPCL over its noisy prior counterpart remains significant, we observe that in comparison with the ST-NPCL (which *lacks* hierarchy), the presence of noisy priors degrades the performance of NPCL (which *has* hierarchy) further as the replay memory size increases from 200 to 500. This is because, with a larger memory size, more context points from past tasks are diverted to the random task components during training. This leads to a noisier task-specific prior matching. Such noisy priors further lead to higher fluctuations in the accuracy, as marked by the larger standard deviations in their accuracy over ST-NPCL and NPCL. *This validates the fact that the performance of NPCL is connected to the conditioning on correct task-specific context and not merely on the higher capacity of the hierarchical latent variable model.*
- **Line 110**: In line 110, we mention optimizing parameters *on* $\mathcal{D}^t$ rather than *of* $\mathcal{D}^t$. Here, $\mathcal{D}^t$ refers to the training samples from task $t$ (building upon the notation from lines $93-94$ and $103$ in main paper). By joint optimization on $\mathcal{D}^t$ and the memory $\mathcal{M}$, we mean that if the entire parametric space of the model is allocated to learning $\mathcal{D}^t$ and $\mathcal{M}$, then interference from the future tasks is more likely.
- **Superscripts in eq. 3**: As mentioned earlier, in a standard image classification setting like ours, the context points are not the only definition of a task given that the training and testing data in such settings can come from multiple tasks simultaneously. We thus use the superscripts $0:t$ to differentiate the task-specific context $C^{0:t}$ and target $X^{0:t}$. For the same reason, $z^t$ is conditioned on $C^t$.
- **Line 151**: We are sorry for the confusion here. Conditioned on each $z \in \{z^G\}_1^N$, we sample one set $\{z^t\}_1^M$ from each task-specific NPCL distribution. Thus, for $M=1$, we have $N$ samples from each of the $t+1$ task-specific encoders, leading to a subtotal of $N \times (t + 1)$ latents. Further, since the $z^t$ for different tasks are sampled from distributions learned by different task encoders, this does not imply mapping every point to only one task. For a clearer explanation, we provide a rough sketch of our hierarchical sampling framework in Figure 3(b) of the attached PDF. Also, in Figure 3(a), we shed light on the effect of different $M$ and $N$ values on the accuracy of NPCL.
**References**:
[1] Kim, D., Cho, S., Lee, W., & Hong, S. (2022). Multi-Task Processes. *ICLR*.
[2] Shen, J., Zhen, X., Worring, M., & Shao, L. (2021). Multi-Task Neural Processes. ArXiv, abs/2111.05820.
[3] Wang, J., Lukasiewicz, T., Massiceti, D., Hu, X., Pavlovic, V., & Neophytou, A. (2022). NP-Match: When Neural Processes meet Semi-Supervised Learning. *ICML*.
[4] Jung, M.C., Zhao, H., Dipnall, J.F., Gabbe, B.J., & Du, L. (2023). Multimodal Neural Processes for Uncertainty Estimation. ArXiv, abs/2304.01518.
---
Rebuttal Comment 1.1:
Comment: Dear Reviewer v3Yx,
We thank you again for taking the time to review this work. We have put our best efforts into preparing the rebuttal to your questions. We would very much appreciate it if you could engage with us and share your feedback on our rebuttal. We would be glad to answer any further questions and clarify any concerns.
Also, if you are satisfied with our answers, please consider revising your score.
With best regards | Summary: This paper presents an uncertainty-aware continual learning framework that utilizes Neural Processes (NPs). The NP model employs a hierarchical latent variable model in conjunction with an experience replay buffer, where a global latent variable captures inter-task correlations and task-specific latents encode more detailed knowledge. To prevent catastrophic forgetting, the method regularizes a Jenson-Shannon divergence between current and past distributions of latent variables. The paper employs entropy for uncertainty quantification.
Strengths: - The concept of regularizing latent distributions between preceding and current tasks is both simple and intuitive.
- Experimental results demonstrate that the proposed approach is competitive with state-of-the-art continual learning methods, while also being capable of quantifying model uncertainty.
- The proposed method only stores two vectors for each task, showcasing memory efficiency in terms of experience replay.
- Generally, the paper is well-written and easy to understand.
Weaknesses: - Given that the encoder includes self-attention and cross-attention layers, the proposed method exhibits quadratic complexity relative to the training data. This complexity could prove problematic when the training dataset is substantial (e.g., 10000).
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: - Why was Jensen-Shannon Divergence (JSD) chosen as a regularization metric? Have other distribution divergence metrics, such as Kullback-Leibler Divergence (KLD), been explored?
- A key strength of the NP family is its few-shot learning capability. Given that the size of the replay buffer can be a limiting factor in some applications, would the proposed approach still be effective with a much smaller buffer size (e.g., 10)?
- How do accuracy and uncertainty depend on the size of the Monte Carlo sampling (N, M)?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations: The authors have addressed the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their careful reading and remarks on our work. In regard to the questions raised, we provide our explanations below:
- **Choice of regularization metrics**: Thank you for your suggestion. We indeed considered KL-divergence as a possible alternative for our regularization metric. The table below reports the average accuracy of NPCL on S-CIFAR-10 (memory size = 200) and S-CIFAR-100 (memory size = 500) using JS and KL divergences as the regularization metrics. Overall, JS-divergence offers consistent gains over its asymmetric KL counterpart in both settings. This could be because KL-divergence induces harder constraints on the distributions of the global and task-specific posteriors in matching their respective past-task priors. However, in CL, the evolving data distribution means that our global and task-specific posteriors keep changing as the model's parameters adapt to fit the newly arrived data. Therefore, the symmetric JS-divergence, which matches the current posterior and its past-task prior against a relatively relaxed mid-point anchor, offers a better trade-off between the plasticity and stability of the model. Our observation is in line with previous work on NPs that uses the JS divergence as a superior choice for the prior-matching loss term [1].
| NPCL | S-CIFAR-10 | S-CIFAR-100 |
|---------------|---------------------|----------------------|
| JS-divergence | 63.78 +/- 1.7 | 37.43 |
| KL-divergence | 62.55 +/- 1.58 | 35.82 |
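The symmetry and boundedness behind this "relaxed mid-point anchor" argument can be illustrated on discrete distributions (the NPCL posteriors are continuous, so this is intuition only; the function names are ours):

```python
import math

def kl(p, q):
    # Kullback-Leibler divergence D_KL(p || q); asymmetric, unbounded.
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def js(p, q):
    # Jensen-Shannon divergence: each argument is matched against the
    # relaxed mid-point mixture m, so it is symmetric and bounded by ln 2.
    m = [(pi + qi) / 2 for pi, qi in zip(p, q)]
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

p, q = [0.9, 0.1], [0.5, 0.5]
assert abs(kl(p, q) - kl(q, p)) > 1e-3        # KL is asymmetric
assert abs(js(p, q) - js(q, p)) < 1e-12       # JS is symmetric
assert js(p, q) <= math.log(2) + 1e-12        # JS is bounded
```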
- **Few-shot replay setting**: Thanks for your suggestion. To evaluate the performance of NPCL in a few-shot replay setting, we report the accuracy of NPCL (ours), ER [2] and DER [3] on S-CIFAR-10 and S-CIFAR-100 with memory sizes of 5 and 10, respectively (see Fig. 2 in the global response PDF). Further, as pointed out by reviewer **qdzJ**, we also compare the expected calibration errors (ECE) of these models to study how well their predicted probabilities reflect the true likelihood of the labels.
As shown in Figure 2 of the PDF in the global response, NPCL consistently outperforms ER and DER in both memory size settings in terms of accuracy (higher is better) and ECE (lower is better). Interestingly, with a memory size of 5 on S-CIFAR-100, we observe that while ER outperforms DER in terms of accuracy, the latter still offers more confident predictions, characterized by a lower ECE than that of ER. NPCL, on the other hand, still exhibits the lowest ECE in this setting. This observation indicates that in very small CL memory settings, while regularizing the predicted logits towards their old forms (as done by DER) helps maintain better predictive confidence than simple ER, regularizing the task distributions towards their old forms (as done by NPCL) remains the superior way to enhance the model's predictive confidence.
- **Effect of Monte Carlo samples M and N**: Please refer to our global response alongside Figure 3(a) of the attached PDF.
- **Quadratic complexity of NPCL relative to the training data**: This is correct. One possible solution for improving the quadratic complexity of the current implementation could be to replace the self-attention and cross-attention blocks, as well as the global average pooling operations, entirely with Multi-Head Attention Pooling (MAP) [4]. In particular, the inducing-points variant of MAP could be used to summarize the set of $n$ input points with $m$ inducing points, where $m \ll n$. This way, the time complexity could be brought down from $\mathcal{O}(n^2)$ to $\mathcal{O}(nm)$, where $n$ is the size of the input context/target set.
We have currently performed a preliminary experiment with MAP where we replace only the global average pooling operations with MAP (please refer to our response to reviewer **1ufQ**). Per the advice from reviewer **1ufQ**, the aforesaid experiment was done from the viewpoint of improving accuracy. We thus plan to explore MAPs for improving the computational complexity of NPs in CL as a potential direction for future work.
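The inducing-point idea behind the $\mathcal{O}(nm)$ cost can be sketched as follows. This is a single-head, projection-free sketch (no learned weights), not the full MAP/Set Transformer module of [4]:

```python
import numpy as np

def map_pool(x, inducing):
    """Each of the m inducing points attends over the n inputs, so the
    attention matrix is (m, n) and the cost is O(n*m) rather than the
    O(n^2) of full self-attention over the inputs."""
    scores = inducing @ x.T / np.sqrt(x.shape[1])    # (m, n)
    scores -= scores.max(axis=1, keepdims=True)      # numerical stability
    attn = np.exp(scores)
    attn /= attn.sum(axis=1, keepdims=True)          # softmax over inputs
    return attn @ x                                  # (m, d) summary

rng = np.random.default_rng(0)
n, m, d = 1000, 16, 32
summary = map_pool(rng.normal(size=(n, d)), rng.normal(size=(m, d)))
assert summary.shape == (m, d)   # n points summarized by m << n outputs
```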
**References**:
[1] Wang, J., Lukasiewicz, T., Massiceti, D., Hu, X., Pavlovic, V., & Neophytou, A. (2022). NP-Match: When Neural Processes meet Semi-Supervised Learning. *ICML*.
[2] Riemer, M., Cases, I., Ajemian, R., Liu, M., Rish, I., Tu, Y., & Tesauro, G. (2018). Learning to Learn without Forgetting By Maximizing Transfer and Minimizing Interference. *ICLR*.
[3] Buzzega, P., Boschini, M., Porrello, A., Abati, D., & Calderara, S. (2020). Dark Experience for General Continual Learning: a Strong, Simple Baseline. *NeurIPS*.
[4] Lee, J., Lee, Y., Kim, J., Kosiorek, A., Choi, S., & Teh, Y. W. (2019, May). Set transformer: A framework for attention-based permutation-invariant neural networks. In International conference on machine learning (pp. 3744-3753). *PMLR*.
---
Rebuttal Comment 1.1:
Comment: Dear Reviewer fvgr,
We thank you again for taking the time to review this work. We have put our best efforts into preparing the rebuttal to your questions. We would very much appreciate it if you could engage with us and share your feedback on our rebuttal. We would be glad to answer any further questions and clarify any concerns.
With best regards | Summary: This paper introduces an uncertainty-aware continual learning framework based on neural processes. The proposed method casts CL into a hierarchical latent model from the global variables to the task-specific variables, and the corresponding regularization terms for each to ensure minimum forgetting along the course of new task learning. The naturally reliable uncertain estimation of NP facilitates class incremental setting by allowing accurate task-specific latent selections.
Strengths: - The presentation of this paper is mostly clear, with the equations properly delivering the ideas.
- The idea of applying NP in CL, explicitly modeling task-specific latent variables, and uncertainty quantification for class-incremental CL are very intriguing.
Weaknesses: **Writing and clarity**
The writing can be further improved.
For example, in my understanding, following Line 163, the two regularization terms are introduced to counter the forgetting instead of being 'two key aspects of forgetting in NPCL.'
In line 114, the authors mentioned 'uses generative factors', while the term 'generative factors' is not appropriately explained here or in the following sections.
**Quantitative comparisons**
While clear improvements over some baseline methods and other NP-based methods are reported, I believe the compared CL methods are all relatively classic but old. More up-to-date baselines might be helpful to better position the proposed method.
Other minor points:
- Figure 2 is too small
- The authors might consider refining the formats of the reference list, as currently the formats are very inconsistent.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Please see the Weakness section.
How is the uncertainty estimation of the proposed method quantitatively? Some further evaluation using metrics like ECE can be more insightful.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: The authors discussed the limitations of the proposed method very comprehensively in Section 6.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their careful reading and remarks on our work. In regard to the questions raised, we provide our explanations below:
- **Writing, figures and referencing formats**: Thank you for pointing these out. We will be incorporating these into our paper.
- **CL Baselines**: Thank you for your suggestion. We agree that CL is a fast-moving field and many new works are emerging. The CL baselines we opted for have been frequently used in a range of other recent CL papers [1-4]. We thus use these clean and powerful baseline models to clearly validate the effectiveness of the proposed NP-based model in CL. We will expand the group of methods used in our comparison.
- **Quantitative results for uncertainty**: Thank you for your suggestion. We agree with the idea of quantitative uncertainty comparisons, and following your suggestion, we report the calibration errors of NPCL and other methods to compare how well their predicted probabilities reflect the true probabilities. In particular, we report the Expected Calibration Error (ECE) [5] and the more robust Adaptive Calibration Error (ACE) [6] of ER, DER, a baseline Attentive NP (ANP) [7], and NPCL (ours) on the S-CIFAR-10 and S-CIFAR-100 settings with memory sizes 200 and 500, respectively (see *Figure 1* in the PDF attached to the *global* response). ECE and ACE are widely used metrics for uncertainty estimation. As shown in Figure 1 in the attached PDF, NPCL consistently produces the most well-calibrated output probabilities. Moreover, even the baseline ANP, whose accuracy scores are comparable with those of ER (see Table 1 in the main paper), produces predictions with lower calibration errors than the ER method.
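For reference, the ECE of [5] used in this comparison can be sketched with equal-width confidence bins (the helper name is ours):

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    """ECE: bin predictions by confidence and average |accuracy - confidence|
    per bin, weighted by the fraction of samples falling in each bin."""
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    n = len(confidences)
    ece = 0.0
    for lo, hi in zip(bins[:-1], bins[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            acc = correct[mask].mean()    # empirical accuracy in the bin
            conf = confidences[mask].mean()
            ece += mask.sum() / n * abs(acc - conf)
    return ece

# A perfectly calibrated toy example: 80% confidence, 80% accuracy.
conf = np.full(10, 0.8)
correct = np.array([1, 1, 1, 1, 1, 1, 1, 1, 0, 0], dtype=float)
assert abs(expected_calibration_error(conf, correct)) < 1e-12
```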
**References**:
[1] Kim, S., Noci, L., Orvieto, A., & Hofmann, T. (2023). Achieving a Better Stability-Plasticity Trade-off via Auxiliary Networks in Continual Learning. *CVPR*.
[2] Sun, Z., Mu, Y., & Hua, G. (2023). Regularizing Second-Order Influences for Continual Learning. *CVPR*.
[3] Boschini, M., Bonicelli, L., Buzzega, P., Porrello, A., & Calderara, S. (2022). Class-Incremental Continual Learning Into the eXtended DER-Verse. *IEEE TPAMI*, 45, 5497-5512.
[4] Gong, D., Yan, Q., Liu, Y., Hengel, A.V., & Shi, J. (2022). Learning Bayesian Sparse Networks with Full Experience Replay for Continual Learning. *CVPR*, 109-118.
[5] Guo, C., Pleiss, G., Sun, Y., & Weinberger, K.Q. (2017). On Calibration of Modern Neural Networks. *ICML*.
[6] Nixon, J., Dusenberry, M.W., Zhang, L., Jerfel, G., & Tran, D. (2019). Measuring Calibration in Deep Learning. *CVPR Workshops*.
[7] Kim, H., Mnih, A., Schwarz, J., Garnelo, M., Eslami, S.M., Rosenbaum, D., Vinyals, O., & Teh, Y.W. (2019). Attentive Neural Processes. *ICLR*.
---
Rebuttal Comment 1.1:
Comment: Dear Reviewer qdzJ,
We thank you again for taking the time to review this work. We have put our best efforts into preparing the rebuttal to your questions. We would very much appreciate it if you could engage with us and share your feedback on our rebuttal. We would be glad to answer any further questions and clarify any concerns.
Also, if you are satisfied with our answers, please consider revising your score.
With best regards | Rebuttal 1:
Rebuttal: We thank the reviewers for their careful comments and suggestions. Here we provide a single-page PDF that contains the figures and tables that we want the reviewers to see while considering our responses. We look forward to a helpful discussion period.
- **Effect of Monte Carlo samples on performance**: As asked by reviewers **fvgr** and **1ufQ**, we discuss the effect of the number of Monte Carlo (MC) samples for the global ($N$) and task-specific ($M$) latent variables on accuracy during inference (please refer to the heatmap in Figure 3(a) of the attached PDF). In particular, we observe two favorable spots in terms of accuracy, one centered around $(M=1, N=50)$ and the other around $(M=10, N=20)$. It is worth noting that the total number of inference-time MC samples grows linearly with the number of tasks $t$, i.e., $\mathcal{O}(N \times M \times t)$, and that a higher number of samples leads to a larger computational overhead. For instance, based on eq. (9) in the main paper, inference on the 10th task of S-CIFAR-100, i.e., $t = 10$, given the two favorable $M$ and $N$ settings amounts to selecting the set of task-specific module predictions with the least uncertainty from a total of (a) $1 \times 50 \times 10 = 500$ predictions using $(M=1, N=50)$, and (b) $10 \times 20 \times 10 = 2000$ predictions using $(M=10, N=20)$. We, therefore, opt for the more efficient setting of $(M=1, N=50)$ throughout our experiments for all settings.
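The sample-count arithmetic above is easy to verify directly (helper name is ours):

```python
def num_inference_predictions(M, N, t):
    # Total task-specific module predictions at inference on task t:
    # one per combination of global sample (N), task-specific sample (M),
    # and seen task (t).
    return N * M * t

# The two favorable settings from the heatmap, evaluated at t = 10:
assert num_inference_predictions(M=1, N=50, t=10) == 500
assert num_inference_predictions(M=10, N=20, t=10) == 2000
```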
Pdf: /pdf/9258d53b9331423698cde65da0d83ca0760dc1fd.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
$p$-value Adjustment for Monotonous, Unbiased, and Fast Clustering Comparison | Accept (poster) | Summary: p-value Adjustment. The p-value adjusted Rand Index is unbiased. The authors claim that its approximations outperform STD Mutual Information.
First, the generalized MI relies on the Tsallis entropy (the same family as Rényi). $\text{AMI}_q$ comes from subtracting the expectation under random permutations. $\text{PMI}_q$ is then derived, and it is monotonic for $q \ge 2$, where $q=2$ corresponds to the Rand Index. The monotonicity is due to the properties of the bypass Tsallis entropy.
Strengths: This is a general tool for testing clustering algorithms. The properties of bypass entropies such as Tsallis are leveraged.
Computational complexity is quadratic.
Weaknesses: It is quite theoretical and general, with poor experiments; it reads more like a statistics paper. Please motivate the use of bypass entropy estimators: it is more than a matter of efficiency, as other formal properties (e.g., monotonicity) are also key.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: How critical is the choice of the Tsallis entropy? What about the Kozachenko-Leonenko/Kraskov et al. approach? In ML there is an interest in Rényi entropies and KNN graphs. Can you say anything about the properties of the graph associated with the clustering table?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: The paper is general but few baselines are explored. Should be nice to test with segmented images or outputs of spectral clustering where there is an implicit bias. In other words, in my opinion the paper needs to tackle more recent datasets used in deep learning and image analysis to approach this result to the NEURIPS community.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your thoughtful comments and constructive feedback. We have taken all comments into consideration and summarize our response as follows:
1. **Experiments with segmented images or spectral clustering**
We conducted two additional experiments using spectral clustering, on an image segmentation dataset [5] and a texture classification dataset [6]. The results are presented in Figure 1 in the attached PDF and confirm the experiments in Figure 4 in the manuscript. As the theoretical aspects of the $\operatorname{PMI}_2$ are the main focus of this work, we prioritize having the comparison to other clustering comparison metrics in the main body (see Reviewer heZh, PDF). The experiments on spectral clustering will be included in the appendix, due to the similarity with Figure 4 and the page limit.
---
2. **Choice of Tsallis entropy over Rényi entropy**
The Tsallis and Rényi entropies are distinct mathematical concepts that both generalize the Shannon entropy. The Tsallis entropy replaces the logarithm with a modified $q$-logarithm (see Definition 2.1). This $q$-logarithm creates a link between information-theoretic and pair-counting measures [1]. For example, the adjusted Tsallis mutual information for $q=2$ is identical to the adjusted Rand Index. The Rényi entropy retains the conventional logarithm, so this link to pair-counting measures is not possible. We rephrased lines 29f. and 86f. to further emphasize this point and refer to [1] for an in-depth discussion.
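The $q$-logarithm and its pair-counting link can be made concrete. A minimal sketch (function names are ours): at $q=2$ the Tsallis entropy collapses to $1-\sum_i p_i^2$, the pair-counting quantity behind the Rand Index connection.

```python
import math

def q_log(x, q):
    # Generalized q-logarithm; recovers ln(x) in the limit q -> 1.
    if q == 1:
        return math.log(x)
    return (x ** (1 - q) - 1) / (1 - q)

def tsallis_entropy(p, q):
    # S_q(p) = sum_i p_i * ln_q(1/p_i); the Shannon entropy at q = 1.
    return sum(pi * q_log(1 / pi, q) for pi in p if pi > 0)

p = [0.5, 0.25, 0.25]
# At q = 2 the Tsallis entropy reduces to 1 - sum_i p_i^2.
assert abs(tsallis_entropy(p, 2) - (1 - sum(pi**2 for pi in p))) < 1e-12
```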
---
3. **Mutual information estimation (Kozachenko-Leonenko/Kraskov et al., nearest-neighbor graphs)**
We assume you are referring to approaches that estimate the mutual information of two *continuous* random variables, given a finite number of samples [2,3]. In this work, we focus on evaluating clusterings that consist of a *finite* number of clusters, so the cluster assignment can be understood as a *discrete* random variable. Therefore, the approach in [2,3] and KNN graph approaches like [4] are not applicable.
---
4. **Properties of the graph associated with the clustering table**
We assume you refer to the contingency table of two clusterings $A$ and $B$ (Table 1). Note that the contingency table is not necessarily a square matrix, so it can at best be seen as a block in the adjacency matrix of a bipartite graph. The nodes on one side of this bipartite graph would represent the clusters in clustering $A$, and those on the other side the clusters in clustering $B$. Edges would represent intersections between the clusters, with the edge weight equal to the size of the intersection. While this is an interesting perspective on contingency tables, it is unclear to us how this contributes to the main content of our work, the $\operatorname{PMI}_2$.
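For concreteness, the contingency table viewed as the weighted bipartite adjacency described here can be built in one line (sparse dictionary form; the helper name is ours):

```python
from collections import Counter

def contingency_table(a, b):
    """Sparse contingency table of two clusterings: key (i, j) is an edge
    of the bipartite graph between clusters of A and clusters of B, with
    weight equal to the size of their intersection."""
    return Counter(zip(a, b))

a = [0, 0, 0, 1, 1, 2]
b = ['x', 'x', 'y', 'y', 'y', 'y']
table = contingency_table(a, b)
assert table[(0, 'x')] == 2 and table[(1, 'y')] == 2
assert sum(table.values()) == len(a)   # edge weights sum to the point count
```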
[1] Romano, S., Vinh, N. X., Bailey, J., & Verspoor, K. (2016). Adjusting for chance clustering comparison measures. _The Journal of Machine Learning Research_, _17_(1), 4635-4666.
[2] Kozachenko, L. F., & Leonenko, N. N. (1987). Sample estimate of the entropy of a random vector. _Problemy Peredachi Informatsii_, _23_(2), 9-16.
[3] Kraskov, A., Stögbauer, H., & Grassberger, P. (2004). Estimating mutual information. _Physical review E_, _69_(6), 066138.
[4] Pál, D., Póczos, B., & Szepesvári, C. (2010). Estimation of Rényi entropy and mutual information based on generalized nearest-neighbor graphs. _Advances in Neural Information Processing Systems_, _23_.
[5] Image Segmentation. (1990). UCI Machine Learning Repository. https://doi.org/10.24432/C5GP4N.
[6] Brodatz, P. (1966). Textures: A Photographic Album for Artists and Designers. *Dover Publications,Inc.*, New York.
---
Rebuttal Comment 1.1:
Title: Response to rebuttal
Comment: Thank you for doing the additional experiments and to answer the questions. I upgrade my rating.
Thanks a lot. | Summary: The article introduces a new measurement called $\text{PMI}_q$ for comparing clustering methods. The name "p-value adjustment" comes from $\text{PMI}_1$, which represents the p-value of the mutual information's variation. This new metric has desirable properties, including type II unbiasedness and monotonicity. The paper claims that $\text{PMI}_q, q\ge 2$ is the first clustering comparison method to satisfy both of these properties.
The definition of $\text{PMI}_q$ involves taking an expectation over all permutations in $S_N$, which can be computationally challenging. To address this challenge, the author proposes a Monte Carlo estimation method for $\text{PMI}_2$ and applies it to empirical experiments.
The paper's contribution lies in proposing a new unbiased and provably monotonic method for addressing the existing bias problem in current clustering comparison metrics. The author provides detailed and sufficient proofs to support the proposed method.
The paper's limitations include its disorganized structure and lack of clarity in some sections. The article's figures and tables need improvement, and the author does not explicitly address any limitations or potential future research directions.
Strengths: 1. This paper provides valuable ideas and techniques for improving cluster comparison, particularly a new unbiased and provably monotonic method for addressing the existing bias problem, which may be useful to practitioners and researchers in the field of cluster comparison.
2. Detailed and sufficient proofs give relatively clear theoretical and technical support for the new methods.
Weaknesses: 1. The article's structure is somewhat disorganized and can be challenging to read. To improve clarity, the authors could provide more examples and intuitive explanations for Definitions 3.1 to 3.4 and 4.1 to 4.4.
2. It would be beneficial to compare the proposed clustering comparison metric with other commonly used metrics in the field. By doing so, the authors could inform readers about whether these metrics possess desirable properties such as type II unbiasedness and monotonicity.
3. The figures and tables in the article need improvement, as some elements are unclear or difficult to read. For example, the legend in Figure 3a is too large.
4. The results in Figure 2(c) could be more effectively presented to allow for easier differentiation across different N values. The authors may want to explore alternative visual representations or labeling techniques to better convey this information.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: 1. Is $\text{PMI}_q$ type I unbiased?
2. Section 6 is unclear on how to calculate $k_{pred}$. Although $\text{PMI}$ can compare two clusterings, it is unclear which clusterings are being compared in the experiments discussed in Section 6. As such, further clarification is needed to understand how $k_{pred}$ is computed in this section.
Confidence: 1: Your assessment is an educated guess. The submission is not in your area or the submission was difficult to understand. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: 1. The paper does not explicitly address any limitations or potential avenues for future research.
2. While the author asserts that $\text{PMI}_2$ is the first clustering comparison method to satisfy both type II unbiasedness and monotonicity, it is challenging to evaluate the significance of this claim without further comparison.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your thoughtful comments and constructive feedback. We have taken all comments into consideration and summarize our response as follows:
1. **Is $\operatorname{PMI}_q$ type I unbiased?**
Yes. Type I unbiased means that when you compare a fixed clustering $A$ to all permutations of any clustering $B$, the average metric value is the same for all $A$. The $p$-value of the mutual information ($\operatorname{PMI}_q$) tells you what percentage of all permutations of the data points would have led to a higher mutual information. Now, if we average that percentage over all permutations, we get a constant 50%. Formally, we use Eq. (15) from Appendix A with $A,B,\tilde{B},\mathcal{B}$ as in Appendix A to get $\mathbb{E}_{B\sim\mathcal{B}}[\operatorname{PMI}_q(A,B)]=1-\mathbb{E}_{\tilde{B}\sim\mathcal{B}}[\operatorname{PMI}_q(A,\tilde{B})]=1/2$. We added the proposition to the main body and a short proof to the appendix.
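The permutation view of the $p$-value adjustment can be sketched with a small Monte Carlo estimate. This illustrates the idea only; it uses the plug-in Shannon MI, not the paper's Tsallis-based $\operatorname{PMI}_q$ estimator, and the function names are ours:

```python
import numpy as np
from collections import Counter

def mutual_information(a, b):
    # Plug-in Shannon MI (in nats) of two discrete label vectors.
    n = len(a)
    ca, cb, cab = Counter(a), Counter(b), Counter(zip(a, b))
    return sum(
        (c / n) * np.log(c * n / (ca[x] * cb[y]))
        for (x, y), c in cab.items()
    )

def pmi_monte_carlo(a, b, n_perm=200, seed=0):
    """Fraction of random label permutations whose MI does not exceed
    the observed MI(a, b): a Monte Carlo sketch of the p-value idea."""
    rng = np.random.default_rng(seed)
    mi_obs = mutual_information(a, b)
    b = np.asarray(b)
    worse = sum(
        mutual_information(a, rng.permutation(b)) <= mi_obs
        for _ in range(n_perm)
    )
    return worse / n_perm

a = [0, 0, 1, 1, 2, 2] * 5
# A clustering compared with itself sits at the top of the permutation
# distribution, since MI(a, sigma(a)) <= H(a) = MI(a, a) for any sigma.
assert pmi_monte_carlo(a, a) > 0.9
```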
---
2. **What is $k_\text{pred}$ in Section 6/Figure 4?**
Given a dataset with $k_\text{true}=10$ ground-truth clusters, we apply $k$-means clustering for different $k_\text{pred}\in \{10, 11, 12, 13, 14, 15\}$. We compare the $k$-means results with the ground truth and get, for example, $\operatorname{PMI}_2$ values of $[0.1, 0.3, 0.4, 0.2, 0.3, 0.1]$. In this case, $k_\text{pred}=12$ had the highest score of $0.4$ and is selected. We repeat this experiment multiple times with different initializations of $k$-means and plot how often each $k_\text{pred}$ was selected in Figure 4a. To stress this selection step, we changed the axis labels to $k_\text{selected}$. We rephrased the figure caption and extended the explanation of the experimental setup with examples in the main text.
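The selection step can be written down directly, using the hypothetical scores from the example above (the helper name is ours):

```python
def select_k(scores_by_k):
    # Model selection: pick the k_pred whose clustering best matches the
    # ground truth under the chosen comparison metric.
    return max(scores_by_k, key=scores_by_k.get)

# The example values from the response: k_pred = 12 attains the top score.
scores = {10: 0.1, 11: 0.3, 12: 0.4, 13: 0.2, 14: 0.3, 15: 0.1}
assert select_k(scores) == 12
```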
---
3. **The paper does not explicitly address limitations or future directions**
A limitation of the $\operatorname{PMI}$ is its computational complexity. We provide an efficient standardized approximation to address this limitation. However, strictly speaking, the standardized approximation loses the theoretical guarantees of the $\operatorname{PMI}$. We mitigate this limitation by also introducing a Monte Carlo approximation that retains the theoretical guarantees up to a tunable approximation error, at the cost of higher runtime. Whether a weaker formulation of monotonicity can be found that carries over to the standardized approximation is an exciting direction for future research. Another open question is whether other metrics like the Jaccard Index could benefit from $p$-value adjustment and how it affects their monotonicity.
We changed the conclusion to state those limitations explicitly. We also added the discussion about future avenues of research.
---
4. **Further comparison to other methods**
We added a table comparing a total of 19 clustering comparison metrics. See comment to Reviewer heZh and PDF attached.
---
5. **Clarity and figures**
We added intuitions for Definition 3.3, 3.2 and 3.4 (See Reviewer comment dKDt and the intuition for the PMI and type I bias in 1. of this comment). For a better understanding of Definitions 4.1 and 4.2 we added a reference to Definition 4 and Theorem 2 in [1], to stay within the page limit. We adjusted the legend size in Figure 3a and reworked the caption of Figure 4 (See 2. this comment). We changed the presentation of Figure 2c, see PDF.
[1] Gösgens, M. M., Tikhonov, A., & Prokhorenkova, L. (2021, July). Systematic analysis of cluster similarity indices: How to validate validation measures. In _International Conference on Machine Learning_ (pp. 3799-3808). PMLR.
---
Rebuttal Comment 1.1:
Comment: Thank you for your reply. I appreciate your response, and it is clear and logical to me. However, as I am not an expert in this particular field, I will maintain the score of "borderline accept". | Summary: The paper presents a performance measurement method for cluster analysis. The method can avoid Type II bias that exists in previous approaches. A tractable approximation is given. The proposed method demonstrates advantages in both synthetic and real-world data sets.
Strengths: The work consists of solid theoretical contributions and satisfactory empirical results. The finding is an important step forward in clustering research.
Weaknesses: More empirical results would make the work more convincing.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: * It is better to elaborate the importance of Type II unbiasedness. Specifically, explaining Definition 3.3 in common words help readers understand the concepts and your contribution.
* There could be a mistake in Proposition 5.2. In the current formula, a higher accuracy leads to lower complexity, while a zero accuracy gives infinity.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your thoughtful comments and constructive feedback. We have taken all comments into consideration and summarize our response as follows:
1. **Further explanation of Type II unbiasedness**
Type I bias means that certain cluster sizes receive higher metric values. Type II bias gives a higher relative rank to certain cluster sizes when multiple clusterings are compared with a ground truth. We included this explanation to help illustrate Definition 3.3.
---
2. **Definition of accuracy**
An accuracy of $a=0.1$ means the error on the quantity is below $0.1$; a lower value of $a=0.01$ is thus more accurate. We eliminate the confusion by renaming "accuracy" to "approximation error".
---
3. **More empirical results would make the work more convincing**
We added further experiments using spectral clustering on an image segmentation and a texture classification dataset; see comment to Reviewer wdxj and the attached PDF.
---
Rebuttal Comment 1.1:
Comment: The author's responses are satisfactory. | Summary: The paper introduces a new method called the p-value adjusted Rand Index (PMI2) for comparing clustering and community detection algorithms. The paper highlights the limitations of existing metrics, such as the Rand Index and Adjusted Rand Index, which suffer from bias and non-monotonicity issues. The PMI2 method addresses these issues by providing a type II unbiased and provably monotonic metric that has fast approximations. The paper also provides experimental results on image and social network datasets to demonstrate the effectiveness of the PMI2 method.
Strengths: Originality: The paper introduces a new clustering comparison metric, the p-value adjusted Rand Index (PMI2), which addresses the limitations of existing metrics. The PMI2 method is the first to be type II unbiased and provably monotonic, and it has fast approximations.
Quality: The paper provides a thorough analysis of the limitations of existing clustering comparison metrics and demonstrates the effectiveness of the PMI2 method through experiments on synthetic benchmarks, image, and social network datasets. The paper also provides theoretical proofs of the PMI2 method's properties.
Clarity: The paper is well-organized and clearly presents the motivation, background, methodology, and experimental results of the PMI2 method. The paper also provides detailed explanations of the theoretical proofs and the approximations used in the method.
Significance: The paper's contributions are significant as they provide a more reliable and accurate way to evaluate clustering and community detection algorithms. The PMI2 method's properties make it a valuable tool for practitioners to choose better algorithms for their datasets. The paper's theoretical proofs and experimental results also provide a deeper understanding of clustering comparison metrics and their limitations.
Weaknesses: One potential weakness of the paper is that it does not compare the PMI2 method with other recently proposed clustering comparison metrics. While the paper provides a thorough analysis of the limitations of existing metrics, it would be valuable to compare the PMI2 method with other state-of-the-art methods to demonstrate its superiority.
Another weakness is that the paper does not provide a detailed explanation of the approximations used in the PMI2 method. While the paper mentions that the method has fast approximations, it would be helpful to provide more information on how these approximations work and how they affect the accuracy of the method.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: A more extensive comparison of the PMI2 method with other recently proposed clustering comparison metrics to demonstrate its superiority would help to further validate the effectiveness of the PMI2 method.
A more detailed and intuitive explanation of the approximations used in the PMI2 method and how they affect its accuracy would help readers to better understand the method and its limitations.
Confidence: 1: Your assessment is an educated guess. The submission is not in your area or the submission was difficult to understand. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 2 fair
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your thoughtful comments and constructive feedback. We have taken all comments into consideration and summarize our response as follows:
1. **Further comparison to other methods**
We added a table comparing a total of 19 clustering comparison metrics (See attached PDF), mostly adapted from a systematic review of cluster comparison metrics [1]. We added a column for type II bias and the $\operatorname{PMI}_q$ for $q\in\{1,2\}$. For all metrics except the $\operatorname{PMI}_q$, we provide examples that violate type II unbiasedness in the appendix.
---
2. **Details about approximations**
**Standardized approximation (**$\mathbf{q=2}$**):** Intuitively, we approximate the true (discrete) distribution of $\operatorname{MI}_q$ with a (continuous) normal distribution (See Figures 2a and b). We added this intuition to Section 5.1. As per Reviewer i4ZB's comment, we remodeled Figure 2c to highlight the effect on accuracy (See attached PDF).
**Monte Carlo approximation:** Given two clusterings $A$ and $B$, we sample contingency matrices (cf. Table 1) with their cluster sizes $\{a_1, \dots, a_{k_A}\}, \{b_1, \dots, b_{k_B}\}$ using [2,3]. The fraction of matrices with $\operatorname{MI}_q$ lower than $\operatorname{MI}_q(A,B)$ then approximates the true $p$-value. We added this explanation to Section 5.2.
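To make the sampling scheme concrete, here is a minimal pure-Python sketch of the Monte Carlo idea. For simplicity it draws contingency tables with fixed cluster sizes by shuffling one labeling (the permutation null model) rather than via the Boyett/Patefield algorithms [2,3] used in the paper, and it uses plain mutual information rather than the generalized $\operatorname{MI}_q$; all function names are illustrative.

```python
import math
import random
from collections import Counter

def mutual_information(a, b):
    """Empirical mutual information (in nats) between two labelings."""
    n = len(a)
    pa, pb = Counter(a), Counter(b)
    pab = Counter(zip(a, b))
    return sum(c / n * math.log(c * n / (pa[x] * pb[y]))
               for (x, y), c in pab.items())

def mc_p_value(a, b, n_samples=2000, seed=0):
    """Fraction of null tables with MI below the observed MI(a, b).

    Shuffling `b` keeps both sets of cluster sizes fixed, so each shuffle
    draws a contingency table from the permutation null model.  Following
    the convention above, the fraction of sampled tables with lower MI
    approximates the p-value-style quantity.
    """
    rng = random.Random(seed)
    observed = mutual_information(a, b)
    b = list(b)  # local copy; shuffled in place
    below = 0
    for _ in range(n_samples):
        rng.shuffle(b)
        if mutual_information(a, b) < observed:
            below += 1
    return below / n_samples
```

Two identical labelings sit at the top of the null distribution (almost every random table has lower MI), while an unrelated labeling lands much lower.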
[1] Gösgens, M. M., Tikhonov, A., & Prokhorenkova, L. (2021, July). Systematic analysis of cluster similarity indices: How to validate validation measures. In _International Conference on Machine Learning_ (pp. 3799-3808). PMLR.
[2] Boyett, J. M. (1979). Algorithm as 144: Random r× c tables with given row and column totals. _Journal of the Royal Statistical Society. Series C (Applied Statistics)_, _28_(3), 329-332.
[3] Patefield, W. M. (1981). Algorithm AS 159: an efficient method of generating random R× C tables with given row and column totals. _Journal of the Royal Statistical Society. Series C (Applied Statistics)_, _30_(1), 91-97. | Rebuttal 1:
Rebuttal: We sincerely appreciate all reviewers’ time and efforts in reviewing our paper. We thank you all for the insightful and constructive suggestions, which helped further polish our paper. We attached a PDF with three improvements to our paper that were stimulated by the reviewers' comments:
- We added a table comparing the proposed $\operatorname{PMI}_2$ to 18 other clustering comparison metrics from the literature.
- We conducted two more experiments using spectral clustering on an image segmentation and a texture classification dataset.
- We modified Figure 2c to differentiate different dataset sizes $N$ better visually.
We included numerous other clarifications, additions, and reformulations in the manuscript, which we summarize in the direct replies to the reviews.
Pdf: /pdf/e7a145b6e2ae4cb230c1144de15a3223736e4277.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: The paper proposes an improved clustering comparison metric. While many current metrics fix a "type I" bias (being biased towards certain cluster size distributions), many still suffer from a "type II" bias (a bias towards certain clusterings when they are compared to a ground truth clustering). Building on top of the Adjusted Rand Index, the type II bias is analyzed, and a fast variant and a type II unbiased, p-value adjusted variant are introduced. A previous approach (SMI) also solves this issue; however, it suffers from being non-monotonic, while the proposed metric is monotonic, i.e., when a clustering is modified in such a way that it objectively becomes better, the metric reflects this. While the proposed PMI metric relies on Monte Carlo approximation and is thus fairly costly, the SMI_2 metric is fast to compute and, with a small modification, seems to be a good approximation to the PMI_2. Results on three different datasets show that the best clusterings selected according to the PMI metric seem more reasonable when compared to two baselines.
Strengths: - The paper is fairly easy to follow and seems to be well written. Even though I am not an expert in this field, I appreciate that the paper guided me through the most important considerations and different steps to get to the proposed metric.
- Monotonicity is a reasonable property to expect from a clustering comparison metric, as also highlighted by Gösgens et al. [9]. Properly defining this concept and showing that the proposed PMI_q metric is monotonic for q>=2 is a valuable contribution.
- The fast approximation of PMI_2 makes the proposed metric more practically applicable.
- Code is provided.
Weaknesses: My main concern is the overall practical significance of the proposed metric. While SMI is not monotonic according to Gösgens et al. [9], there seems to be empirical evidence of monotonicity for SMI_q. A proper proof for PMI_q is very nice, but the real-world experiments are based on the SMI_2-based approximation of PMI_2. In my mind the obvious baseline to compare to would be SMI_2, but this is omitted for some reason and I really wonder why. Even though PMI_2 seems to select the more reasonable clusterings in all experiments, the margin to AMI_2 is not very large in two of the three setups, and I would not be surprised if it were even smaller for the SMI_2 score. Adding such a comparison would give a more complete picture of how the different metrics compare.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Given that you only theoretically prove that PMI_2 is type II unbiased, but you then in practice approximate it with SMI_2, is there some guarantee that this property still holds?
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: Limitations or potential negative impact have not been discussed in the paper. I would not be able to come up with a direct negative societal impact myself, but a discussion of potential limitations of the approach (if any are clear) would have been valuable.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your thoughtful comments and constructive feedback. We have taken all comments into consideration and summarize our response as follows:
1. **Concern about practical significance**
- While technically the $\operatorname{SMI}_2$ was introduced in [1], the authors only provide the runtime complexity $\mathcal{O}(N^3 k_A \max\{k_A,k_B\})$ of the $\operatorname{SMI}_q$ for general $q$, which is intractable in many real situations. Therefore, to the best of our knowledge, the $\operatorname{SMI}_2$ has not seen any adoption in practice. We reformulate the $\operatorname{SMI}_2$ and provide an implementation with a runtime of $\mathcal{O}(k_A k_B)$, thus opening the $\operatorname{SMI}_2$ for practical applications. Further, we normalize the $\operatorname{SMI}_q$ in such a way that it is closely related to the $\operatorname{PMI}_q$. The latter is monotonic for $q=2$ but not $q=1$, providing further arguments for practitioners to choose the $\operatorname{SMI}_2$ over the $\operatorname{SMI}_1$.
- Ultimately, the nice theoretical guarantees only hold for the $\operatorname{PMI}_2$ and not the $\operatorname{SMI}_2$. Therefore we also introduce an unbiased Monte Carlo estimator that retains these guarantees up to a tunable error bound at the cost of a higher runtime. Practitioners can choose our MC implementation if theoretical guarantees are required.
- The difference between $\operatorname{AMI}_2$ and $\operatorname{PMI}_2$ in Figure 4 is subtle, but this is to be expected. Type II bias correction is just one step forward in clustering comparison and does not turn existing assessments based on metrics like the $\operatorname{AMI}_2$ on their head. We incorporated this train of thought into the discussion in Section 6.
---
2. **Why we base our real-world experiments on $\Phi(\operatorname{SMI}_2)$**
While the differences between the $\operatorname{PMI}_2$ and the $\operatorname{AMI}_2$ in Figure 4 are already subtle, the difference between the two approximation schemes would be virtually non-existent (as you suspected). We think that for many applications, the practical benefits of the $\Phi(\operatorname{SMI}_2)$ approximation outweigh the theoretical guarantees of a Monte Carlo approximation (See Section 5), which is why we chose it for the real-world experiments. As the $\Phi(\operatorname{SMI}_2)$ is just a normalized $\operatorname{SMI}_2$, a comparison to the latter would not provide further insight. We reformulated the explanations in Sections 5 and 6 to explain this choice better.
---
3. **Discussion about limitations**
A limitation of the $\operatorname{PMI}$ is its computational complexity. We provide an efficient standardized approximation to address this limitation. However, strictly speaking, the standardized approximation loses the theoretical guarantees of the $\operatorname{PMI}$. We mitigate this limitation by also introducing a Monte Carlo approximation that retains the theoretical guarantees up to a tunable approximation error, at the cost of higher runtime. We explicitly added a discussion of these two limitations to the manuscript. (See also our comment to Reviewer i4ZB)
[1] Romano, S., Vinh, N. X., Bailey, J., & Verspoor, K. (2016). Adjusting for chance clustering comparison measures. _The Journal of Machine Learning Research_, _17_(1), 4635-4666.
---
Rebuttal Comment 1.1:
Title: Post-rebuttal opinion
Comment: Thank you for the detailed answers to my and other reviewers questions. I do feel like with the additional provided results the paper will be stronger.
I certainly appreciate that the paper introduces an efficient SMI_2 implementation, nevertheless, I still think the initially submitted paper pushes the idea that PMI_2 is type II unbiased and that this is the real metric you are presenting. As you say yourself, the nice guarantees do not hold for SMI_2 and thus they don't hold for the efficient PMI_2 approximation either. In the end this boils down to the paper selling method A, stating the type II unbiasedness is important, but then evaluating method B, which is not type II unbiased. As such I find that the paper isn't really written in the clearest of ways and given that I can't really see your discussion of this in the revised paper, I find it a bit difficult to now increase my score.
---
Reply to Comment 1.1.1:
Comment: Thank you for your thoughtful feedback and for acknowledging the additional results we've provided to strengthen the paper.
We present the $\operatorname{PMI}_2$ as an ideal method, with the limitation that it is computationally intractable. Therefore we provide two ways to trade exactness for speed.
1. **Why do we stress the $\operatorname{PMI}_2$?**
The $\operatorname{AMI}_2$ can be seen as a first-order correction, adjusting the $\operatorname{MI}_2$ by its first statistical moment. Standardization to the $\operatorname{SMI}_2$ (also known as the Z-score) corrects for the second statistical moment, and one might ask how to include the third, fourth, and higher moments. The $p$-value comprises all the information about the distribution. In that sense, the $\operatorname{PMI}_2$ is the ultimate goal of such a discussion, incorporating all statistical moments (think Gram–Charlier A series).
More than that, we prove the $\operatorname{PMI}_2$ to be type II unbiased and monotonic. While it is computationally intractable, we provide two ways to trade exactness for speed.
---
2. **Trade-off between exactness and speed**
We stress the $\operatorname{PMI}_2$ because it allows for multiple, fundamentally different approximation approaches, two of which are presented in the paper:
- **Monte Carlo** preserves the theoretical properties up to a Gaussian error $a$ (in the central limit theorem). The error can be reduced at the expense of a higher runtime. For errors < 0.001, for example, the MC approximation is in many cases faster than the $\operatorname{SMI}_1$ (See Figure 3a; we added $a=0.001$ to the legend). However, it is still inefficient compared to $\Phi(\operatorname{SMI}_2)$.
- **$\Phi(\operatorname{SMI}_2)$** can be understood as a second-order Gram–Charlier A expansion, as outlined above. Higher orders could also be calculated using Eq. 19 in Appendix C, but we focus on the second order for simplicity. An exact error estimation is difficult and beyond the scope of this paper. Therefore we study the accuracy of this approach experimentally (Figure 2c) and show a comparison to the Monte Carlo trade-off in Figure 3b. For an empirical study of the type II unbiasedness of the $\operatorname{SMI}_2$, see Figure 10 in [1]. We conclude that the MC approach should be used when theoretical guarantees are required, and the $\Phi(\operatorname{SMI}_2)$ should be used when the MC approach is too slow.
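For intuition, the standardized approximation boils down to a z-score pushed through the standard normal CDF. The sketch below is an illustration only; `null_mean` and `null_std` are placeholders standing in for the analytic moments of the mutual information under the null model, which this snippet does not compute.

```python
import math

def phi(z):
    """Standard normal CDF, expressed via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def standardized_approximation(mi_observed, null_mean, null_std):
    """Approximate the p-value-adjusted score by Phi of the z-score.

    smi = (MI - E[MI]) / sd(MI) under the null; Phi(smi) then approximates
    the probability that a null table scores below the observed MI.
    """
    smi = (mi_observed - null_mean) / null_std
    return phi(smi)
```

An observed value equal to the null mean maps to 0.5, while values several null standard deviations above (below) the mean map close to 1 (0).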
Unfortunately, we cannot upload a revised version of the manuscript. However, we will include the discussion in 2. in Section 5.
[1] Romano, S., Vinh, N. X., Bailey, J., & Verspoor, K. (2016). Adjusting for chance clustering comparison measures. The Journal of Machine Learning Research, 17(1), 4635-4666. | null | null | null | null | null | null |
AMDP: An Adaptive Detection Procedure for False Discovery Rate Control in High-Dimensional Mediation Analysis | Accept (spotlight) | Summary: This paper proposes a high-dimensional mediation analysis detection procedure with false discovery control. Classic FDR control for multiple testing does not use information across tests making it overly conservative. The authors propose a local FDR-based procedure for identifying mediators and a data-driven approach for estimating the null densities and determining thresholds. Experiments show the proposed approach enjoys high power while still maintaining good FDR control.
Strengths: The application of multiple testing on high-dimensional mediation analysis is interesting.
The local FDR uses across-dimensional information, and the test statistics and thresholds can be estimated from data.
The proposed algorithm maximises power while maintaining FDR control.
Applying a local FDR-based method to the mediation analysis problem is not trivial. The data-driven approach proposed in Section 2.2 could be useful in other FDR control applications (e.g. how the null densities are obtained).
Weaknesses: This paper is very technical. I would appreciate some more introduction and motivation:
What is local FDR?
Why do we want to use local FDR in mediation analysis?
What may be the obstacles when using local FDR in mediation analysis?
If the authors could present answers to these questions at the beginning, it would make the paper feel more motivated.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: None.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your valuable suggestions. We will add more introduction and motivation in the revised version. We now respond to your comments point by point.
**Question 1:**
* What is local FDR?
**Response:** In the context of multiple hypothesis testing, the false discovery rate (FDR) is a measure that quantifies the proportion of false positives among the total number of rejected null hypotheses. The local false discovery rate takes this concept one step further and aims to estimate the false discovery rate for each individual test or hypothesis separately (Efron et al., 2001). It represents the posterior probability that a hypothesis is null, given its corresponding p-value. The local FDR plays an important role in multiple hypothesis testing as it considers the varying degrees of uncertainty and statistical evidence supporting or refuting each individual hypothesis, and it allows for a more nuanced and accurate evaluation of the significance of each hypothesis by leveraging information across large-scale tests.
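As a concrete toy version of this definition: under a two-group model where null p-values are Uniform(0, 1) and (hypothetically) non-null p-values follow a Beta($a$, 1) density, the local FDR has a closed form. The mixture weight `pi0` and shape `a` below are illustrative placeholders, not estimates obtained as in Efron et al. (2001).

```python
def local_fdr(p, pi0=0.9, a=0.2):
    """Local FDR under a toy two-group mixture model.

    Null p-values are Uniform(0,1) (density f0 = 1); non-null p-values are
    assumed to follow Beta(a, 1), with density f1(p) = a * p**(a - 1).
    Then
        lfdr(p) = pi0 * f0(p) / (pi0 * f0(p) + (1 - pi0) * f1(p))
    is the posterior probability that the hypothesis is null given p.
    """
    f0 = 1.0
    f1 = a * p ** (a - 1.0)
    return pi0 * f0 / (pi0 * f0 + (1.0 - pi0) * f1)
```

A tiny p-value yields a local FDR near 0 (almost surely non-null), a p-value near 1 yields a value near 1, and the function is monotone in p for this mixture.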
**Question 2:**
* Why do we want to use local FDR in mediation analysis?
**Response:** High-dimensional mediation analysis is often associated with a multiple testing problem for detecting significant mediators. Assessing the uncertainty of this detection process via the false discovery rate (FDR) has garnered great interest. To control the FDR in multiple testing, _**two essential steps are involved: ranking and selection.**_ Existing approaches either construct p-values without calibration or disregard the joint information across tests, leading to conservativeness in FDR control or non-optimal ranking rules for multiple hypotheses. In contrast, the local FDR allows for assessing the significance of each individual mediation path separately while accounting for the multiple testing aspect. Moreover, it produces the optimal rule for ranking hypotheses, as demonstrated in Theorem 1 of the manuscript. In summary, the proposed local FDR approach improves the accuracy and reliability of identifying significant mediation paths between variables in multiple hypothesis testing.
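The ranking-and-selection idea can be sketched with a generic local-FDR step-up rule in the spirit of Sun and Cai (2007): sort hypotheses by local FDR and reject the largest prefix whose running average stays below the target level. This is an illustration of the general principle, not the paper's AMDP procedure, whose thresholds are determined in a data-driven way.

```python
def lfdr_rejections(lfdrs, alpha=0.1):
    """Generic local-FDR ranking-and-selection rule (illustrative).

    Sort hypotheses by local FDR (ranking), then reject the largest prefix
    whose running average local FDR stays below alpha (selection).  The
    running average estimates the FDR of the resulting rejection set.
    Returns the indices of the rejected hypotheses, best first.
    """
    order = sorted(range(len(lfdrs)), key=lambda i: lfdrs[i])
    rejected, total = [], 0.0
    for k, i in enumerate(order, start=1):
        total += lfdrs[i]
        if total / k > alpha:
            break
        rejected.append(i)
    return rejected
```

For example, `lfdr_rejections([0.01, 0.5, 0.02, 0.9, 0.05], alpha=0.1)` rejects the three hypotheses with small local FDR and stops once adding the next one would push the average above 0.1.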
**Question 3:**
* What may be the obstacles when using local FDR in mediation analysis?
**Response:** The main challenge we encounter pertains to estimating the local FDR within the framework of mediation analysis. In this context, the density of p-values follows a mixture distribution, as indicated in Equation (4). This mixture distribution involves two distinct types of null hypotheses: $H_{01}$ (corresponding to $f_{01}(p)$) and $H_{10}$ (corresponding to $f_{10}(p)$). Effectively distinguishing between these two components and obtaining precise estimates of $f_{10}(p)$ and $f_{01}(p)$ present a formidable obstacle. Motivated by the knockoff method (Barber et al., 2015), we explore a strategy rooted in the symmetry property exhibited by p-values under the composite null hypothesis; see lines 183-194 for more details.
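To illustrate the symmetry idea (this is a toy sketch, not the estimator proposed in the paper): because null pairs are symmetric about $p_1=0.5$ or $p_2=0.5$, the number of nulls landing in a rejection corner can be estimated by counting points in its mirrored corners, where alternatives essentially never fall. Region names and thresholds below are illustrative.

```python
import random

def estimated_nulls_in_corner(pairs, t1, t2):
    """Toy symmetry-based estimate of null pairs in S = [0, t1) x [0, t2).

    H01-type nulls have uniform p1, so they land in the mirror region
    (1 - t1, 1] x [0, t2) as often as in S; likewise H10-type nulls land
    in [0, t1) x (1 - t2, 1].  Counting the mirrors estimates the nulls
    in S.  (A real estimator must also handle H00 pairs, which appear in
    every corner; this sketch ignores that correction.)
    """
    mirror_01 = sum(1 for p1, p2 in pairs if p1 > 1 - t1 and p2 < t2)
    mirror_10 = sum(1 for p1, p2 in pairs if p1 < t1 and p2 > 1 - t2)
    return mirror_01 + mirror_10

# Simulated H01-type nulls: p1 uniform, p2 concentrated near zero.
rng = random.Random(0)
pairs = [(rng.random(), 0.05 * rng.random()) for _ in range(10000)]
in_corner = sum(1 for p1, p2 in pairs if p1 < 0.1 and p2 < 0.1)
estimate = estimated_nulls_in_corner(pairs, 0.1, 0.1)
```

In this simulation every pair in the corner is a null, and the mirror count recovers that number up to sampling noise.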
We will rewrite our introduction section and integrate the motivation above in the revised version to improve the readability of the article.
**Reference:**
Barber, R. F., & Candès, E. J. (2015). Controlling the false discovery rate via knockoffs. The Annals of Statistics, 43(5), 2055 – 2085.
Efron, B., Tibshirani, R., Storey, J. D., & Tusher, V. (2001). Empirical Bayes analysis of a microarray experiment. Journal of the American statistical association, 96(456), 1151-1160.
---
Rebuttal Comment 1.1:
Comment: I have read the authors' replies and encourage authors to revise their papers as they mentioned in the reply.
---
Reply to Comment 1.1.1:
Title: Response to reviewer
Comment: Thank you very much for your careful review and constructive comments, which have contributed much to improving the paper. | Summary: This paper describes a method to control for false discoveries in high-dimensional mediation analysis. This is the problem of inferring whether any of a large set of possible mediator variables act as a mediator between an exposure variable and an outcome variable. This work is particularly interested in the application where the exposure is a single-nucleotide polymorphism (SNP), the outcome is the expression level of a gene, and the mediators are the methylation levels at a set of CpGs. It is known that methylation is involved in the regulation of gene expression, so in this setting one wishes to discover whether the SNP is affecting the methylation at a CpG and in turn this is affecting the level of gene expression of a gene. This is an interesting problem and one of interest when seeking to uncover the functional effect of genetic variants. In this setting, there will typically be many CpGs as possible mediators, so we are in the realm of high-dimensional statistics and need to account for the fact that we are running a large number of tests in parallel. Previous methods for this problem are deemed to be too conservative, sacrificing power for the suppression of false discoveries, and the authors seek to identify an approach which has improved power.
The method is verified on simulated data and illustrated on a prostate cancer dataset, where it is shown to identify more (SNP, CpG, gene) triples than existing methods.
Strengths: - The paper tackles a problem of practical interest
- The method is justified theoretically (I did not check the proofs)
- The method is justified empirically on simulated data and illustrated on a real-world data set
Weaknesses: The presentation felt a bit compressed with many relevant details deferred to the appendix, such as the examples described in Remark 2 and many of the plots relating to the prostate cancer data-set.
The authors provide limited information on the context, assuming a reader will be familiar with genetic and epigenetic analysis.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: No questions
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: I do not believe that the method has any obvious negative societal impacts.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Question 1:**
* The presentation felt a bit compressed with many relevant details deferred to the appendix, such as the examples described in Remark 2 and many of the plots relating to the prostate cancer dataset.
**Response:** Thank you for carefully reviewing our paper. We will adjust the typesetting of both the main text and the appendix, and revise the content accordingly in the updated version.
**Question 2:**
* The authors provide limited information on the context, assuming a reader will be familiar with genetic and epigenetic analysis.
**Response:** Thank you for your insightful comments, which help improve the quality and depth of our work. In the revised version, we will expand on the background in genetics and epigenetics, offering a more detailed and comprehensive explanation.
Strengths: The paper is clear on the method, the assumptions and the proofs. The empirical section is promising and adequate.
Weaknesses: Some of the assumptions and the writing are not clear, or seem too strong to me. For example, the assumption about "no confounding", which is not listed as a formal assumption but is mentioned in the text and attributed to another paper. It would be helpful if the authors could provide an explanation of why this is not a limiting assumption.
Assumption 2 seems reasonable, but Assumption 1 again seems limiting, with only a few distributions that could satisfy it. It is not a paper-breaking assumption, but it would be useful if the authors were upfront about it and discussed which distributions satisfy it. Similarly, regarding Assumption 2, it would be helpful if the authors could write a couple of sentences discussing what the assumption means and implies.
The abstract and the intro make a big deal about the curse of dimensionality becoming a blessing, but this was not obvious to me from the paper's text. Could the authors explain what they mean by this exactly? Could you point out where this is discussed in the paper?
I thought the "limitations" section was required this year for NeurIPS submissions, but it is missing, and the discussion section at the end is not comprehensive about this.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: Please see the weakness section.
Confidence: 1: Your assessment is an educated guess. The submission is not in your area or the submission was difficult to understand. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: No.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank reviewer 9J5U for your comments and for citing your concerns. 9J5U’s main concern is that some assumptions in our paper seem too strong. We can provide some explanations.
Major questions:
**Question 1:**
* The reviewer concern about the assumption about "no confounding".
**Response:** To be more precise, this assumption is well known as the sequential ignorability assumption, as established in the modern causal inference framework (Imai et al., 2010; Valeri et al., 2013). Specifically, this assumption comprises two components: (1) "no unmeasured confounding" of the exposure-outcome relationship and (2) "no unmeasured confounding" of the mediator-outcome relationship. Under the sequential ignorability assumption, the Natural Indirect Effect (NIE) can be identified and is equivalent to the product of $\alpha$ and $\beta$ in our model (Imai et al., 2010). In the revised version, we have taken the reviewer's feedback into account and further clarified the language by switching "no confounding" to "no unmeasured confounding" in Section 2.1. This change helps ensure a clearer and more precise description of our assumptions and their implications for the mediation analysis.
Although the sequential ignorability assumption is a common assumption in traditional mediation analysis, it's worth noting that certain studies have delved into the intricacies of the hidden confounder issue, as mentioned in Song et al. (2020) and Song et al. (2021). Given this perspective, we will further consider extending our current work to settings where hidden confounders are present.
**Question 2:**
* The reviewer asked for explanations of Assumptions 1-2.
**Response:** Our method extracts a pair $(p_1, p_2)$ for each exposure-mediator-outcome relationship and employs these pairs to estimate the FDP on the two-dimensional plane $[0,1] \times [0,1]$. The theoretical basis supporting FDP estimation is the assumption that p-values are uniformly distributed under the null hypothesis, a widely recognized principle (Benjamini et al., 1995; Hung et al., 1997). Due to the presence of a composite null hypothesis in the mediation effect, we elaborate on Assumptions 1-2 to illustrate the properties of the p-value distribution under the composite null hypothesis.
(1) For Assumption 1, under $H_{00}$, both $p_1$ and $p_2$ obey the uniform distribution, resulting in $(p_1, p_2)$ also following the uniform distribution on the two-dimensional plane $[0,1]\times[0,1]$. Consequently, the sampling distribution of $(p_1, p_2)$ is symmetrical around $p_1=0.5$ and $p_2=0.5$. Under $H_{01}$, $p_1$ still obeys the uniform distribution, but $p_2$ does not, leading to $(p_1, p_2)$ being only symmetrical about $p_1=0.5$ on $[0,1]\times[0,1]$. Similarly, under $H_{10}$, $p_2$ obeys the uniform distribution, resulting in $(p_1, p_2)$ being only symmetrical about $p_2=0.5$ on $[0,1]\times[0,1]$. It is essential to emphasize that Assumption 1 specifically applies to the null mediators.
(2) For Assumption 2, since a non-null p-value theoretically lies within $[0, 0.5)$, it implies that the rejection region $S$ is a subset of $[0, 0.5) \times [0, 0.5)$ without requiring additional information. Consequently, $S$ and its symmetric regions do not overlap. Moreover, as the sample size $n$ tends to infinity, the probability of p-values under alternative hypotheses falling within $[0.5, 1]$ approaches zero. For example, as $n$ goes to infinity, p-values under $H_{11}$ and $H_{10}$ are not expected to fall within the region $\tilde{S_{01}}$, because $\tilde{S_{01}}$ theoretically only contains p-values under $H_{00}$ and $H_{01}$. Similarly, the region $\tilde{S_{10}}$ theoretically only includes p-values under $H_{00}$ and $H_{10}$.
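To illustrate the symmetry properties in Assumption 1, here is a minimal simulation sketch (ours, not the paper's code; the z-statistics $\sqrt{n} \cdot \text{effect} + \text{noise}$ are a stylised stand-in for the actual regression tests): under $H_{00}$ the $(p_1, p_2)$ pairs fill the four quadrants around $(0.5, 0.5)$ evenly, while under $H_{01}$ only the symmetry about $p_1 = 0.5$ survives.

```python
import math
import random

def two_sided_p(z):
    """Two-sided p-value for a standard-normal test statistic."""
    return 2.0 * (1.0 - 0.5 * (1.0 + math.erf(abs(z) / math.sqrt(2.0))))

def simulate_pair(n, alpha_eff, beta_eff, rng):
    """Simulate (p1, p2) for one triplet; both effects zero gives H00.

    The z-statistics (sqrt(n) * effect + standard-normal noise) are a
    stylised stand-in for the regression tests used in the paper.
    """
    z1 = math.sqrt(n) * alpha_eff + rng.gauss(0.0, 1.0)
    z2 = math.sqrt(n) * beta_eff + rng.gauss(0.0, 1.0)
    return two_sided_p(z1), two_sided_p(z2)

def quadrant_fractions(pairs):
    """Fractions of pairs in the four quadrants around (0.5, 0.5)."""
    counts = [0, 0, 0, 0]
    for p1, p2 in pairs:
        counts[(p1 > 0.5) * 2 + (p2 > 0.5)] += 1
    return [c / len(pairs) for c in counts]

rng = random.Random(0)
# H00: both p-values uniform, so each quadrant holds about a quarter.
h00 = [simulate_pair(200, 0.0, 0.0, rng) for _ in range(20000)]
# H01: only p1 stays uniform; p2 concentrates near zero.
h01 = [simulate_pair(200, 0.0, 0.3, rng) for _ in range(20000)]
print(quadrant_fractions(h00), quadrant_fractions(h01))
```

Under $H_{10}$ the roles of $p_1$ and $p_2$ simply swap.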
**Question 3:**
* The reviewer asked for an explanation of “the curse of dimensionality becoming a blessing”.
**Response:** The core idea behind our method is to extract a pair $(p_1, p_2)$ for each exposure-mediator-outcome relationship and construct a p-value-based statistic, which we refer to as the local FDR. We can estimate the local FDR by treating each pair of p-values as a sample point, which effectively converts the high-dimensional problem into a large-sample scenario. The increasing dimensionality yields more accurate estimates of the local FDR via nonparametric density estimation. Consequently, as the dimension increases, we can better control the FDR, making our method more effective in practical applications.
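A toy numerical sketch of this "blessing" effect (ours; a crude histogram estimator stands in for the actual nonparametric density estimator): the more $(p_1, p_2)$ "samples" there are, the more accurately a density on $[0,1] \times [0,1]$ is recovered.

```python
import random

def histogram_density_error(n_points, n_bins, rng):
    """Mean absolute error of a histogram estimate of the uniform
    density on [0,1]^2 (true density = 1 everywhere).

    A toy stand-in for the local-FDR density estimation: more (p1, p2)
    'samples' (i.e. higher-dimensional mediation problems) give a more
    accurate nonparametric estimate.
    """
    counts = [[0] * n_bins for _ in range(n_bins)]
    for _ in range(n_points):
        i = min(int(rng.random() * n_bins), n_bins - 1)
        j = min(int(rng.random() * n_bins), n_bins - 1)
        counts[i][j] += 1
    cell_area = (1.0 / n_bins) ** 2
    err = 0.0
    for row in counts:
        for c in row:
            density = c / (n_points * cell_area)
            err += abs(density - 1.0)
    return err / (n_bins * n_bins)

rng = random.Random(0)
# Estimation error shrinks as the number of p-value pairs grows.
errors = [histogram_density_error(n, 10, rng) for n in (500, 5000, 50000)]
print([round(e, 3) for e in errors])
```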
**Question 4:**
* The reviewer asked for "limitations" section.
**Response:** As previously discussed in Question 3, our article focuses on addressing high-dimensional mediation effects. Consequently, the estimation of local FDR and FDP might not be as effective for low-dimensional mediation problems. We will rewrite our discussion section and incorporate this limitation in the revised version to provide a more comprehensive analysis.
**Reference:**
Benjamini, Y., & Hochberg, Y. (1995), ..., Journal of the Royal statistical society: series B (Methodological), 57(1), 289-300.
Hung, H. J., O'Neill, R. T., Bauer, P., & Kohne, K. (1997), ..., Biometrics, 11-22.
Imai, K., Keele, L., & Tingley, D. (2010), ..., Psychological methods, 15(4), 309.
Song, Y., Zhou, X., Zhang, M., Zhao, W., Liu, Y., Kardia, S. L., ... & Mukherjee, B. (2020), ..., Biometrics, 76(3), 700-710.
Song, Y., Zhou, X., Kang, J., Aung, M. T., Zhang, M., Zhao, W., ... & Mukherjee, B. (2021), ..., Statistics in medicine, 40(27), 6038-6056.
Valeri, L., & VanderWeele, T. J. (2013), ..., Psychological methods, 18(2), 137.
---
Rebuttal Comment 1.1:
Comment: Thank you for addressing my comments, I will update my score. I will keep the low confidence, since this is not my area and I am not very confident about the specific contributions.
---
Reply to Comment 1.1.1:
Title: Response to Reviewer 9J5U
Comment: Thank you very much for the response! | Summary: UPDATE
In light of the revisions the authors made I have raised my score and now support acceptance of this work. Thank you very much.
-----------------------------------------
The authors propose a procedure to increase statistical power to identify mediators while controlling the FDR in high-dimensional data sets. Their method leverages the proportions of composite null hypotheses and the distribution of p-values under the alternative to derive an algorithm to estimate p-values for mediator variables while controlling the FDR. They perform a theoretical analysis suggesting that their method is asymptotically optimal and showcase that it controls FDR more consistently than other methods (DACT, JC, JS-mixture) in a simulation study and identifies more mediators than JS-mixture in an empirical study suggesting higher statistical power.
Strengths: - The problem is very relevant as high-dimensional data are frequent not only in genetics, but also in other domains like neuroscience and user data, which all potentially suffer from similar problems.
- The work is very extensive comprising theoretical analysis, simulations and an empirical application of the methods.
- Related methods are extensively discussed in the introduction.
Weaknesses: - There is no real discussion of the limitations/breakpoints of this method.
- The effect sizes chosen for the simulation analysis seem to be unreasonably large (at least showing how the method behaves for smaller effect sizes, which are more commonly observed empirically, would be informative).
- The paper is occasionally quite dense and would benefit from focusing on some key aspects.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: This is very interesting work with broad applications to many empirical studies. I am willing to support acceptance of this paper, if the following concerns/questions are addressed appropriately. I hope you find my comments helpful and constructive.
**Major points**
- Can you provide a motivation for the choice of simulation parameters (i.e., sample sizes, number of mediators, FDR alpha, and effect sizes)? Current genetic studies tend to have larger sample sizes (on the order of 20,000) while older studies had smaller sample sizes, so including a wider range of samples, e.g. 100, 500, 1000, 10000, 20000, may better cover empirical scenarios. Moreover, I don’t understand why you picked alpha = 0.1 as reference, since most studies choose alpha = 0.05; for genetic studies this might even be smaller.
- In Figure 1, it appears that AMDP deviates more from the 0.1 FDR line when the effect sizes get smaller. Could you speculate on why that is? On that note, the effect sizes chosen are rather large. What happens for smaller effect sizes (up to .2 or .3). Does this deviation get larger, is this a breakpoint for the method?
- Similarly, in the empirical analysis (Figure 3), JS-mixtures identifies more triplets for alpha=0.01. Could you explain why that is, does this trend hold for alphas that are even smaller like 0.001?
- What are the conditions under which the method is not appropriate? When does it break? I am missing a clear limitation section.
**Minor point**
- p.8 ll. 288 ff. essentially contains a figure caption for a figure that is presented in the supplement, which is not very helpful for the reader. I would suggest either summarizing the take-aways from this supplementary analysis here and moving the caption to the figure where it belongs, or spending this space on discussing the implications, including break points and limitations, of the analyses you present in the main manuscript.
- Just out of curiosity, can you elaborate on how this method could be used for image analysis? Can this be applied to neuroimaging (e.g., fMRI) as well?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: - The paper does not have a clear limitation section (see Questions).
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for the thoughtful comments.
**Weaknesses 1:** Thanks for your reminder. Our method is designed for high-dimensional mediation analysis, but its performance may degrade in low-dimensional designs. We will rewrite our discussion section and incorporate this limitation in the revised version to provide a more comprehensive analysis.
**Weaknesses 2:** We adopt settings similar to those in Dai et al. (2023). More details about the parameter settings can be found in our reply to Question 1.
**Weaknesses 3:** Your suggestion will lead to a significant improvement of the manuscript. High-dimensional mediation analysis is often associated with a multiple testing problem for detecting significant mediators, which has attracted significant interest. _**To control the FDR, our article focuses on two key steps: ranking and selection.**_ We will emphasize these two key aspects in the revised version to enhance the readability of the article.
**Question 1:**
* The reviewer's inquiry regarding the motivation for the choice of simulation parameter.
**Response:**
(1) In Figures 1-2, we assess how the four methods (JS-mixture, DACT (Efron), DACT (JC), and AMDP) are influenced by effect size, mediator size, and sample size.
(2) We aimed to closely simulate real-world data scenarios. (a) We referred to several real datasets, including the TCGA lung cancer cohort dataset (Zhang et al., 2021), the Multi-Ethnic Study of Atherosclerosis (Du et al., 2023), and the TCGA prostate cancer dataset (Dai et al., 2023), and then constructed the simulation examples. (b) We adopt parameter settings similar to those in Dai et al. (2023).
(3) We follow the widely used FDR levels of 0.05 (Song et al., 2020; Guo et al., 2023) and 0.1 (Dai et al., 2023; Mosig et al., 2001).
* The reviewer asked if we could cover empirical scenarios with a wider range of sample size, and the reason why we pick alpha=0.1 as reference.
**Response:** As stated in our response to Question 1, we will adopt the commonly used FDR levels of 0.05 and 0.1. Our method is specifically designed for genetic data; following Mosig et al. (2001), we adopted an FDR level of 0.1 in the original manuscript. Following your suggestion, we have integrated experiments at the FDR threshold of 0.05 with a wide range of sample sizes (200, 500, 1000, 5000). We present the experimental results under the sparse and dense alternatives scenarios in the attached PDF. From Tables 1-2 of the PDF, we observe results similar to those with the FDR level of 0.1. Note that when the sample size reaches 5000, the power of all four methods converges close to 1, so we did not undertake experiments with larger sample sizes of 10000 and 20000.
**Question 2:**
* The reviewer asked the reason why AMDP deviates more from the 0.1 FDR line when the effect sizes get smaller in Figure 1.
**Response:** There is a missing "+1" in Equation (10): to enhance the robustness of FDR control, our method estimates the numerator of Equation (10) as (the number of false discoveries + 1), as was done in Du et al. (2023) and Guo et al. (2022). When the effect size is smaller, the FDP estimate has a small denominator; thus the "+1" term becomes non-negligible, leading to slightly conservative FDR control. To mitigate this problem, a "+0.5" adjustment can be employed effectively. As depicted in Figure 1, the JS-mixture method also exhibits a similar phenomenon. The difficulty of detecting significant variables with minor effect sizes results in rare cases of no rejections in some experiments, which subsequently reduces the average FDR.
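The conservative behaviour of this numerator adjustment can be shown with a stylised FDP estimator (our own sketch, not the paper's actual Equation (10)): the "+1" is negligible when many rejections are made, but dominates when few are.

```python
def fdp_hat(num_false_estimate, num_rejections, adjust=1.0):
    """Stylised FDP estimate: (estimated false discoveries + adjust) / rejections.

    This mimics the '+1' (or '+0.5') numerator adjustment discussed above;
    it is an illustration, not the paper's actual estimator.
    """
    if num_rejections == 0:
        return 0.0
    return (num_false_estimate + adjust) / num_rejections

# With many rejections the adjustment barely moves the estimate ...
many = fdp_hat(num_false_estimate=50.0, num_rejections=1000)   # ~0.051 vs 0.050
# ... but with few rejections (small effect sizes) it dominates,
# making the procedure conservative; '+0.5' softens this.
few = fdp_hat(num_false_estimate=1.0, num_rejections=10)       # 0.2 vs 0.1
few_half = fdp_hat(num_false_estimate=1.0, num_rejections=10, adjust=0.5)
print(many, few, few_half)
```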
* The reviewer shows concern about the effect sizes chosen in our simulation study.
**Response:** Our simulation settings encompass a wide range of effect sizes, including 0.2, 0.3, and even smaller values such as 0.12. More details can be found in the response to Question 1.
**Question 3:**
* The reviewer’s question about the trend observed in Figure 3 for different alpha levels, including alphas smaller than 0.01.
**Response:** We agree that JS-mixture identifies more triplets than our method when alpha is smaller. As explained in our response to Question 2, our method employs a "+1" adjustment in Equation (10) to enhance FDR control. As the alpha level decreases, the number of rejections is significantly reduced and the "+1" term becomes non-negligible, leading to fewer identified triplets. It is essential to point out that practitioners usually care about results at widely used FDR levels, such as 0.05 and 0.1.
**Question 4:**
* The reviewer asked for a clear limitation section.
**Response:** Although the performance of AMDP suffers in low-dimensional designs, as stated in our response to Weakness 1, our method does not exhibit a well-defined breakpoint.
**Minor question:**
* **Response:** Thanks for your valuable suggestion. We will incorporate the necessary modifications in the revised version.
* **Response:** Thanks for the interesting question. The application of our method to neuroimaging is a plausible avenue. In voxel-based analyses, our method may be used to capture active voxels in various brain regions, including the visual, auditory, and motor regions. For further inspiration, see Chang et al. (2023).
**Reference:**
Chang, J., He, J., Kang, J., & Wu, M. (2023), …, JASA, 1-14.
Dai, J. Y., Stanford, J. L., & LeBlanc, M. (2022), …, JASA, 117, 198-213.
Du, J., Zhou, X., …, & Mukherjee, B. (2023), …, Genet Epidemiol, 47, 167-184.
Du, L., Guo, X., Sun, W., & Zou, C. (2023), …, JASA, 118, 607-621.
Efron, B. (2004), …, JASA, 99, 96-104.
Guo, X., Ren, H., Zou, C., & Li, R. (2022), …, JASA, 1-13.
Guo, X., Li, R., Liu, J., & Zeng, M. (2023), …, JBES, 1-14.
Mosig, M. O., Lipkin, E., ..., & Friedmann, A. (2001), …, Genetics, 157, 1683-1698.
Song, Y., Zhou, X., ..., & Mukherjee, B. (2020), …, Biometrics, 76, 700-710.
Zhang, H., Zheng, Y., Hou, L., Zheng, C., & Liu, L. (2021), …, Bioinformatics, 37, 3815-3821.
---
Rebuttal Comment 1.1:
Title: Limitation section
Comment: Thank you very much for addressing most of my concerns. However, do I understand you correctly, that you do not plan to include a limitation section? If that is the case, I would urge the authors to strongly consider including a limitation section. They best know what these limitations could be, but I seriously doubt that there are no limitations whatsoever. Should this be included, I am willing to raise my score.
---
Reply to Comment 1.1.1:
Title: Response to reviewer
Comment: We thank the reviewer oiJi for the feedback. We genuinely value the reviewer's insights, and are willing to add a limitation section in the revised version as:
_“Our approach effectively handles high-dimensional mediators but may not perform optimally when confronted with low-dimensional mediators. This distinction is attributed to the nature of our method, wherein the two-dimensional p-values linked to each exposure-mediator-outcome relationship effectively serve as 'samples' for the estimation of local FDR and FDP. Consequently, the reduction in dimensionality can lead to less precise estimates of local FDR and FDP.”_ | Rebuttal 1:
Rebuttal: Thanks again for the comments. We present additional experimental results under sparse alternatives scenario and dense alternatives scenario with the FDR level of 0.05 in the PDF format.
Pdf: /pdf/c6d4f16413f2303c93719562e49f5b06b774cc9d.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
DensEMANN: How to Automatically Generate an Efficient while Compact DenseNet | Reject | Summary: In this paper, the authors propose an enhanced version of DensEMANN, which efficiently grows and trains small DenseNet architectures. They employ a macro-algorithm to expand new layers and utilize a micro-algorithm to construct new convolution operations. Through iterative layer growth, this method generates novel architectures within a few GPU hours.
Strengths: - Detailed experimental settings are provided in this paper.
Weaknesses: - The motivation behind this research requires additional clarification.
- As shown in Table 2, GO methods consume the least GPU days and achieve the best performance. Therefore, it raises the question of why not directly utilize GO methods.
- The experimental comparison with the original DensEMANN is missing.
- Experiments are only conducted on small datasets. Can DensEMANN be applied on large datasets, for example ImageNet-1k?
- Too many hyper-parameters are introduced in this method, which makes it difficult to apply the method to other tasks.
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: - By automatically growing small DenseNet architectures, what insights can we obtain in architectural design?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 2 fair
Limitations: Please see the Weaknesses.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > The motivation behind this research requires additional clarification.
Our main motivation was to improve upon DensEMANN in order to make it reach its "full potential". More specifically, as stated in the introduction, we designed "a new version of this algorithm with the aim of approaching [i.e. getting as close as possible to] state-of-the-art performance for well-known benchmarks, or at least the state-of-the-art Pareto front between performance and model size".
We also had a secondary goal of testing the claim in (García-Díaz and Bersini 2021) that DensEMANN-generated NNs reach equal or better performance than similar NNs when these are trained from scratch. That paper only provided "reliable" results for DensEMANN's micro-algorithm, as the full method was still very unstable. With an improved and stabilized DensEMANN, we were able to establish more reliable comparisons with a variety of baselines, including retraining the DensEMANN-generated NNs from scratch (Section 4.2 and Appendix A.1.2), and comparing DensEMANN's performance with a naive NAS algorithm that focuses only on DenseNet (Appendix A.2).
> As shown in Table 2, GO methods consume the least GPU days and achieve the best performance. Therefore, it raises the question of why not directly utilize GO methods.
Various reasons:
* The difference in GPU days between GO methods and DensEMANN (and similar growing-based algorithms such as NASH) is not that large. Both are usually at around 0.5 or 0.3 GPU days. The cause for this is likely that the two methods' basic NAS paradigms are very similar: both GO and growing-based methods look for optimal modifications to an architecture while it is being trained.
* Table 2 indeed shows that the performance of GO methods is greater than DensEMANN's, but at the cost of bigger final models (in terms of trainable parameters). Even if they were tweaked to generate small models like DensEMANN's, GO methods are often based on extracting subnetworks out of overparametrized supernetworks, which themselves are often very large.
* GO methods such as ENAS and DARTS have been criticised in the past for various reasons (see Fair DARTS, Chu et al. 2020). Their main drawbacks are the unfair advantages that supernetwork structures give to certain architecture elements (mainly skip connections in the case of DARTS), and their inability to properly represent a continuous encoding of discrete subnetwork choices (the architecture weights associated with a choice between different elements are often too close together to distinguish which choice is "best").
This said, we _would_ like to implement GO-based functionalities in future versions of DensEMANN, mainly to broaden the search space. For instance, the dense block topology could be interpreted as a supernetwork, and architecture weights could be attributed to connections between layers in order to extract an optimal non-dense subnetwork. We are also interested in GO-based methods that allow for a choice between different kernel sizes, such as superkernels (Stamoulis et al. 2019).
> The experimental comparison with the original DensEMANN is missing.
The original DensEMANN was too unstable for an experimental comparison against the full algorithm (macro- and micro-algorithm). However, we were able to run a comparison in terms of the micro-algorithm alone. The below table compares the results from the original paper (Table I of García-Díaz and Bersini 2021) to our own for the same experiment (means and standard deviations over 5 runs).
| DensEMANN version | Dataset | Architecture style | GPU execution time (hours) | Num. output filters in layer | Trainable parameters (k) | Test set acc. (%) |
|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| Original (García-Díaz and Bersini, 2021) | CIFAR-10 | DenseNet | 1.66 ± 1.06 | 11.40 ± 2.07 | 3.6 ± 0.5 | 62.74 ± 2.92 |
| | | DenseNet-BC | 1.90 ± 0.16 | 21.60 ± 0.55 | 11.8 ± 0.2 | 71.61 ± 1.11 |
| | SVHN | DenseNet | 0.26 ± 0.01 | 13.40 ± 0.55 | 4.1 ± 0.1 | 47.89 ± 3.10 |
| | | DenseNet-BC | 0.45 ± 0.12 | 17.00 ± 3.16 | 9.8 ± 1.4 | 57.81 ± 1.08 |
| Ours | CIFAR-10 | DenseNet | 1.59 ± 0.51 | 13.80 ± 1.79 | 4.1 ± 0.4 | 63.17 ± 0.76 |
| | | DenseNet-BC | 1.30 ± 0.32 | 13.20 ± 1.10 | 8.1 ± 0.5 | 69.48 ± 0.78 |
| | SVHN | DenseNet | 0.17 ± 0.04 | 10.60 ± 2.88 | 3.4 ± 0.7 | 38.59 ± 2.63 |
| | | DenseNet-BC | 0.15 ± 0.00 | 10.00 ± 0.71 | 6.7 ± 0.3 | 45.40 ± 0.36 |
This table shows that our version of DensEMANN produces smaller layers than the original algorithm, but does so faster.
> Experiments are only conducted on small datasets. Can DensEMANN be applied on large datasets, for example ImageNet-1k?
We are currently running experiments in this direction. See the global rebuttal.
> Too many hyper-parameters are introduced in this method, which brings difficulties for applying this method on other tasks.
We are currently testing DensEMANN on diverse tasks, ideally with few-to-no modifications to the hyperparameters. See the global rebuttal.
> By automatically growing small DenseNet architectures, what insights can we obtain in architectural design?
Two main insights:
* Small, densely-connected architectures can reach high and even competitive performance on benchmark tasks. This further highlights how modern CNN models are often needlessly overparameterized.
* There seems to be a limit to what one can do with DenseNet. The accuracy vs. size scatter plot in Figure 3 (at the end of Appendix A) suggests that there is a suboptimal Pareto front that is unique to small DenseNet topologies. This suboptimal front may be due to too many redundant elements in DenseNet architectures, as noted by previous works such as Log-DenseNet (Hu et al. 2017).
The above insights open up the question of how far we can push the state of the art's collective Pareto front for accuracy vs. size. One way to do this is by making DensEMANN's search space broader and less restricted (see our answer to the question on GO methods).
---
Rebuttal Comment 1.1:
Title: To Reviewer Jgdu: Please respond to the author rebuttal
Comment: Dear Reviewer Jgdu,
The deadline for author discussion period is approaching soon. Please respond to the author's rebuttal and indicate whether your concerns have been addressed. Thank you!
-AC | Summary: The authors study an algorithm for neural architecture search (NAS) called DensEMANN, which uses a progressive adaptation of a DenseNet architecture during training to find an efficient neural network for the target task.
Strengths: The paper is very well written and the authors do a good job of explaining how the DensEMANN algorithm works.
Weaknesses: I’m not sure that the paper currently has enough substance for publication. The authors spend the first 5 pages on introduction and description of the previously published DensEMANN algorithm. The changes made to this algorithm are described in Section 3.2 in only 20 lines of text and appear to be primarily changes to the various hyperparameters of the existing algorithm.
The primary contribution of the paper is the comparison of DensEMANN with other NAS methods on CIFAR-10 in Section 4.3. The results are certainly interesting, but I think the authors should focus on quality per unit time rather than the quality and parameter counts plotted in Figure 2. Plotting quality against the execution times in Table 2 would make it much easier to compare DensEMANN to existing methods in efficiency, which is the primary property of interest, I think.
That being said, I’m not sure empirical comparison of an existing method with state-of-the-art methods is enough novelty to merit publication at NeurIPS. I’d encourage the authors to continue to develop their exploration. For example, clearly establishing a new state-of-the-art in efficiency for NAS. Or, taking the models from DensEMANN and studying their efficiency for deployment.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: I do not have specific questions aside from those listed above.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 1 poor
Limitations: I did not identify potential negative societal impact of this work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > The primary contribution of the paper is comparison of DensEMANN with other NAS methods on CIFAR-10 in section 4.3. The results are certainly interesting, but I think the authors should focus on quality per unit time rather than quality and parameter counts plotted in Figure 2. Plotting quality against the execution times in table 2 would make it much easier to compare DenseMANN to existing methods in efficiency, which is the primary property of interest, I think.
In the global rebuttal's enclosed figures PDF, we provide a scatter plot for error rate on CIFAR-10 vs. execution time in GPU days.
> That being said, I’m not sure empirical comparison of an existing method with state-of-the-art methods is enough novelty to merit publication at NeurIPS. I’d encourage the authors to continue to develop their exploration. For example, clearly establishing a new state-of-the-art in efficiency for NAS. Or, taking the models from DensEMANN and studying their efficiency for deployment.
We are currently exploring the application of DensEMANN to diverse datasets and application fields, and we aim to consider different kinds of NN architectures in future work. See the global rebuttal.
---
Rebuttal Comment 1.1:
Title: Reviewer response
Comment: Thank you to the authors for their response. I think that the additional directions for exploration you've listed are very interesting and would encourage the authors to pursue them. However, I don't think the additional results with CIFAR-100 are enough for me to raise my score.
---
Reply to Comment 1.1.1:
Comment: Dear Reviewer kBNK,
Please excuse my late reply. I have just finished submitting my PhD thesis manuscript, which has kept me very busy during the last few days.
Thank you very much for your reply. The fact that a NeurIPS reviewer finds our research route "very interesting", even though our results do not yet reach the required level, is a strong motivation to keep up our future work and research.
Strengths: - The pruning and recovery stages which are the contributions of this paper are very well motivated and clearly explained. The way the pruning and recovery stages are designed is novel and using it in this context is rather unique.
- The authors have very aptly identified how DensEMANN fits in and very well introduced incremental approaches and NAS architectures.
Weaknesses: - The difference between this paper and the original DensEMANN is clearly communicated by the authors; however, all the points mentioned, except 2 (d):
> The pruning and recovery stages have been heavily modified to avoid long recovery stages and their effects. We indeed observed that the kCS of settled filters is not constant but actually decreases very slowly over time. If the recovery stage is too long, this causes a very harsh pruning after which the accuracy cannot be recovered.
are either just based on observation, not introduced in the paper, or very straightforward changes. I would suggest considering only 2 (d) as a contribution of this paper.
- There are other aspects of this model which are very well framed and novel; however, it is important to note that these parts of the DensEMANN architecture are not introduced in this paper but in the original DensEMANN paper, which reduces the novelty of this work by a large margin.
- The paper does not provide a clear explanation of how the parameter limit of 500k was chosen for the experiments. Does going above this number of parameters make DensEMANN very computationally intensive, or is it unable to grow the network sensibly, especially considering that 500k parameters are very few by modern standards, especially for vision tasks?
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: - This did not affect my rating at all, but I do agree with this; I would hope that the authors could add a bit more to these statements to explain the significance of growing-based NAS algorithms:
> The most serious competitor to growing-based NAS algorithms are trainless or zero-cost algorithms
[30 , 31 , 2 ]. These evaluate candidate NN on basis of their performance with random weights. Such
methods can explore large search spaces in a matter of minutes or even seconds [31, 2]. However,
extra time is still needed for training the final candidate architecture in order to use it.
- The authors evaluate DensEMANN on three popular image classification benchmarks. Can they provide more insights into how the algorithm performs on other datasets (especially larger, more complex ones) or tasks, such as object detection or semantic segmentation (which I think DensEMANN should very simply be able to approach)? The datasets question is also a suggestion to include more experiments, since dataset complexity is a very important aspect in judging the efficacy of DensEMANN. As for the other-tasks question, I was just wondering if the authors had already tried it out and faced any problems?
- A very common question arising from this paper is whether the authors have tried replicating similar techniques for other, larger, similar architectures and for more modern architectures. Understanding whether DensEMANN is able to generate more complex models is a very big question (and limitation).
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 2 fair
Limitations: - With the current set of evaluations the authors provide, it is very hard to determine whether DensEMANN-like techniques can be applied to larger and more modern models.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > The paper does not provide a clear explanation of how the parameter limit of 500k was chosen for the experiments. Does going above this number of parameters make DensEMANN very computationally intensive, or is it unable to grow the network sensibly? This is especially relevant considering that 500k parameters are very few by modern standards, especially for vision tasks.
DensEMANN does not explicitly limit the number of parameters to 500k, or any other value for that matter. Rather, it implicitly limits the architecture's size through a core philosophy of "only growing what's strictly necessary" for a significant improvement in accuracy. This core philosophy is especially visible in the macro-algorithm's accuracy-based improvement criterion, which undoes the last layer's addition if it did not cause a significant change in the NN's accuracy. The micro-algorithm's pruning-recovery loops also contribute to limiting the size of each new layer to a bare minimum.
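The accept/undo loop behind this implicit size limit can be sketched in a few lines. This is a hypothetical illustration, not the actual DensEMANN code; the function names, interfaces, and the threshold default are assumptions made for the sketch (the paper's default improvement threshold of IT = 0.01 is mentioned elsewhere in this discussion).

```python
# Hypothetical sketch of an accuracy-based improvement criterion like the one
# described above: keep adding layers while each addition yields a significant
# accuracy gain, and undo the last addition otherwise.
# `grow_layer` and `evaluate` are illustrative stand-ins, not real DensEMANN APIs.

def grow_until_no_improvement(network, grow_layer, evaluate, improvement_threshold=0.01):
    """Return the largest network whose every layer addition improved accuracy
    by at least `improvement_threshold`."""
    best_acc = evaluate(network)
    while True:
        candidate = grow_layer(network)      # add (and train) one more layer
        acc = evaluate(candidate)
        if acc - best_acc < improvement_threshold:
            return network                   # undo the last addition: keep the smaller net
        network, best_acc = candidate, acc
```

Note how the size limit emerges naturally here: growth stops as soon as the marginal accuracy gain falls below the threshold, with no explicit parameter cap anywhere.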
> This did not affect my rating at all, and I do agree with this, but I would hope that the authors could add a bit more to these statements to explain the significance of growing-based NAS algorithms
Various authors in the past have characterized NAS as a bilevel problem, consisting of a parameter optimization problem nested into an architecture optimization problem (see e.g. "A Survey on Evolutionary Construction of Deep Neural Networks", Zhou et al. 2021).
In growing and/or pruning algorithms, the aim is to parallelize these two levels as much as possible, optimizing the neural architecture and its weights at the same time. Meanwhile, in zero-cost approaches, the goal is to serialize the two levels, and to postpone the most computation-heavy of these two components (the training of candidate architectures) as much as possible. This said, as pointed out by White et al. (2022) in their ICLR blog post "A Deeper Look at Zero-Cost Proxies for Lightweight NAS", even if one uses zero-cost training proxies to optimize the NN's architecture, one still needs to optimize the architecture's weights (i.e. train the network) in order to achieve optimal performance on a target task.
Furthermore, when used on their own for performance prediction, known zero-cost training proxies have got multiple disadvantages such as unreliable performance on different target tasks, and inherent biases towards certain topology patterns (again, see the blog post by White et al. 2022). Growing and pruning-based NAS algorithms can avoid these disadvantages in a simple and efficient way: by simultaneously training the network on the target dataset while suggesting and trying out changes to its neural architecture.
> The authors evaluate DensEMANN on three popular image classification benchmarks. Can they provide more insights into how the algorithm performs on other datasets (especially larger, more complex ones) or tasks, such as object detection or semantic segmentation (which I think DensEMANN should very simply be able to approach)? The datasets question is also a suggestion to include more experiments, since dataset complexity is a very important aspect in judging the efficacy of DensEMANN. As for the other-tasks question, I was just wondering if the authors had already tried it out and faced any problems?
We are currently running experiments on larger and more complex datasets (in particular ImageNet1k), different kinds of tasks and application fields. See the global rebuttal.
> A very common question arising from this paper is whether the authors have tried replicating similar techniques for other, larger, similar architectures and for more modern architectures. Understanding whether DensEMANN is able to generate more complex models is a very big question (and limitation).
This is one of our main planned research routes for future work (see the global rebuttal). We do believe that DensEMANN-based approaches are not limited to DenseNet, or even CNN. We are in particular interested in using a (Dens)EMANN-like method for growing RNN and Transformer networks.
---
Rebuttal Comment 1.1:
Title: Response to the authors rebuttal
Comment: > DensEMANN does not explicitly limit the number of parameters to 500k, or any other value for that matter. Rather, it implicitly limits the architecture's size through a core philosophy of "only growing what's strictly necessary" for a significant improvement in accuracy. This core philosophy is especially visible in the macro-algorithm's accuracy-based improvement criterion, which undoes the last layer's addition if it did not cause a significant change in the NN's accuracy. The micro-algorithm's pruning-recovery loops also contribute to limiting the size of each new layer to a bare minimum.
I see, that makes sense, although I was wondering if 500k is the number of parameters after which DensEMANN is unable to generate useful parameters for the architectures and tasks you describe? For the lesser number of parameters, the results are impressive and I'm not undermining that but I was just wondering about this.
> Various authors in the past have characterized NAS as a bilevel problem, consisting of a parameter optimization problem nested into an architecture optimization problem (see e.g. "A Survey on Evolutionary Construction of Deep Neural Networks", Zhou et al. 2021).
Thanks, in my original review I was meaning if you could probably add this to the paper itself (a mere suggestion), which I think would be very helpful for readers to better understand the significance of the problem you pose.
> We are currently running experiments on larger and more complex datasets (in particular ImageNet1k), different kinds of tasks and application fields. See the global rebuttal.
Thanks for including and answering multiple of my questions in the global rebuttal, especially about other architectures, datasets and tasks, as well as sharing a comparison. I do understand that running experiments on large datasets can be a bit difficult given the short rebuttal period.
- For the CIFAR-100 experiments you do not limit the number of parameters and just let the network grow, right?
- As for the comparisons and experiments, it seems that after a certain number of parameters, DensEMANN is not able to generate parameters over a certain limit, so just increasing execution time (as for other architectures) does not lead to considerable improvements. This is mainly because the DenseNet trained for CIFAR-10 in the original paper has 25.6M parameters, whereas DensEMANN generates very few parameters (<500k), after which, based on your rebuttal, it seems unable to grow the network sensibly?
- I was wondering if there was an explanation for why DensEMANN-generated models do not work really well for CIFAR-10? (I am not fixated on this paper beating SoTA but an explanation for these results would be very helpful)
---
Reply to Comment 1.1.1:
Comment: Dear reviewer aDQ3,
Please excuse my late reply. I have just finished submitting my PhD thesis manuscript, which has kept me very busy during the last few days.
> I see, that makes sense, although I was wondering if 500k is the number of parameters after which DensEMANN is unable to generate useful parameters for the architectures and tasks you describe? For the lesser number of parameters, the results are impressive and I'm not undermining that but I was just wondering about this.
Yes indeed, in the sense that any newly grown architecture elements do not bring a significant improvement to the accuracy (as established by the macro-algorithm's improvement threshold).
> Thanks, in my original review I was meaning if you could probably add this to the paper itself (a mere suggestion), which I think would be very helpful for readers to better understand the significance of the problem you pose.
Thanks a lot for this suggestion! We will indeed add this to the paper.
> Thanks for including and answering multiple of my questions in the global rebuttal, especially about other architectures, datasets and tasks, as well as sharing a comparison. I do understand that running experiments on large datasets can be a bit difficult given the short rebuttal period.
> * For the CIFAR-100 experiments you do not limit the number of parameters and just let the network grow, right?
Indeed, as for all other datasets.
> * As for the comparisons and experiments, it seems that after a certain number of parameters, DensEMANN is not able to generate parameters over a certain limit, so just increasing execution time (as for other architectures) does not lead to considerable improvements. This is mainly because the DenseNet trained for CIFAR-10 in the original paper has 25.6M parameters, whereas DensEMANN generates very few parameters (<500k), after which, based on your rebuttal, it seems unable to grow the network sensibly?
As explained above, whether an improvement is "significant" or not depends on the improvement threshold. By default, we set it to IT=0.01 (i.e. an improvement of 1 percentage point). In some settings, this may be seen as too strict of a limitation. If a lower IT value is set, then the algorithm will also accept more gradual improvements in the accuracy, and in that case perhaps the final DenseNet would reach various millions of parameters.
This can be compared to the different DenseNet-BC architectures proposed in the original paper. As the number of parameters was increased (0.8M -> 15.3M -> 25.6M parameters), the error percentage decreased at an increasingly smaller rate (4.51% -> 3.62% -> 3.46%). Thus, adding the first ~14.5M parameters buys an increase of almost 1 percentage point in accuracy, but adding ~10M more parameters only buys an increase of 0.16 percentage points.
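The diminishing returns described above can be checked with a few lines of arithmetic over the DenseNet-BC figures quoted in that paragraph (the numbers come from the discussion above; the script itself is only an illustration):

```python
# Marginal accuracy gain per added parameter for the three DenseNet-BC
# configurations quoted above (parameters in millions, CIFAR-10 error in %).
params = [0.8, 15.3, 25.6]
errors = [4.51, 3.62, 3.46]

gains = [errors[i] - errors[i + 1] for i in range(len(errors) - 1)]   # accuracy gained (pp)
added = [params[i + 1] - params[i] for i in range(len(params) - 1)]   # parameters added (M)
gain_per_million = [g / a for g, a in zip(gains, added)]
# The first jump (~14.5M extra parameters) buys ~0.89 points of accuracy;
# the second jump (~10.3M extra) buys only ~0.16, a much worse trade-off.
```

Under a strict improvement threshold like IT = 0.01, only the first jump would count as a "significant" improvement, which is consistent with DensEMANN stopping well below the multi-million-parameter regime.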
> * I was wondering if there was an explanation for why DensEMANN-generated models do not work really well for CIFAR-10? (I am not fixated on this paper beating SoTA but an explanation for these results would be very helpful)
Some hypotheses:
* If the improvement threshold was set to a lower value, the accuracy may improve. This said, the parameter count would also increase...
* In Annex A.3, we consider the possibility that DensEMANN's search space is too limited, and will always be bound to a suboptimal accuracy-vs-size Pareto front. Relaxing the search space's constraints (e.g. adding a broader choice of block-level configurations, some sparsity to the network's connections, or different kernel sizes) could enable the algorithm to reach more optimal results.
* The training process could be fine-tuned. Taking the bilevel nature of NAS into account:
* Further regularization or data augmentation techniques could be used for training the learnable weights.
* Different growing-then-pruning operations (like our improvement and pruning stages) may be scheduled **solely to improve the learning process**. We are currently considering the possibility that, as in biological neural networks, **a quick and large growth** in neural capacity followed by **a harsh but judiciously chosen pruning** may result in improved learning capacity: the growing phase increases the network's pattern representation power, and the pruning phase keeps only the "essential" elements of the learnt patterns. | Summary: This paper proposes a new version of the existing DensEMANN, which grows small DenseNet architectures and trains them on target data. It claims that this version can quickly and efficiently search for small and competitive DenseNet architectures. The proposed approach has been evaluated on a number of benchmarks.
Strengths: - The idea of automatically generating efficient architectures from a reference makes sense, and can be of great interest in many application scenarios.
- The proposed approach grows the architecture at both macro and micro levels, which seems to be a valid strategy.
- The proposed approach has been evaluated on various benchmarks, showing comparable performance with the state of the art.
Weaknesses: - The delta compared to the original algorithm seems to be not very significant.
- The claim of being able to generate efficient DenseNet architectures is only supported by the number of parameters. For DenseNet-like models, the number of parameters might not be a good indicator of efficiency, due to the many skip connections. Thus it seems not a very fair comparison.
- It is not clear how the proposed approach performs on larger datasets such as ImageNet.
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: - How does this approach perform on ImageNet?
- How do the FLOPs/latency of the discovered models compare with the state of the art?
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 2 fair
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > How does this approach perform on ImageNet?
We are currently testing DensEMANN on ImageNet1k. See the global rebuttal.
> How do the FLOPs/latency of the discovered models compare with the state of the art?
In the below table we report the latency (in MFLOPs) of our discovered models for different datasets (CIFAR-10, Fashion-MNIST, SVHN, CIFAR-100) and settings of DensEMANN (with and without block replication, with and without CutOut regularization). The default configuration is "w/ repl. + cutout", i.e. with block replication and CutOut.
| Dataset | DensEMANN setting | Latency (MFLOPs) |
|:---:|:---:|:---:|
| CIFAR-10 | w/o all | 56.57 ± 9.43 |
| | w/ cutout | 53.47 ± 5.67 |
| | w/ repl. | 74.81 ± 16.36 |
| | w/ repl. + cutout | 78.35 ± 22.85 |
| Fashion-MNIST | w/o all | 8.39 ± 2.34 |
| | w/ cutout | 11.75 ± 7.15 |
| | w/ repl. | 11.79 ± 4.98 |
| | w/ repl. + cutout | 17.41 ± 8.94 |
| SVHN | w/o all | 80.67 ± 10.24 |
| | w/ cutout | 73.91 ± 35.17 |
| | w/ repl. | 90.15 ± 43.86 |
| | w/ repl. + cutout | 139.75 ± 24.43 |
| CIFAR-100 (IT=0.01) | w/ repl. + cutout | 105.05 ± 19.23 |
| CIFAR-100 (IT=0.005) | w/ repl. + cutout | 156.31 ± 15.87 |
N.B.: The full results for the CIFAR-100 architectures (including latency) are reported in the global rebuttal.
In the below table we report the latency of the DenseNet-BC generated with the naive NAS baseline from Appendix A.2:
| Dataset | Naive NAS setting | Latency (MFLOPs) |
|:---:|:---:|:---:|
| CIFAR-10 | N = 3, MPC = None | 92.54 ± 12.36 |
| | N = 3, MPC = 200k | 75.45 ± 0.00 |
| | N = 1, MPC = None | 84.51 ± 19.08 |
| | N = 1, MPC = 60k | 61.01 ± 0.00 |
For a comparison with a similar NN in the state of the art: according to the PyTorch docs, EfficientNet-B0 (Tan and Le 2020) has a latency of 390 MFLOPs and contains 5.29M trainable parameters. Its authors claim that it reaches 98.1% accuracy on CIFAR-10, but they give it a smaller size of 4M parameters.
Also according to the PyTorch docs, MobileNet v2 (Sandler et al. 2019) has a latency of 300 MFLOPs and contains 3.50M trainable parameters. MobileNet v3 (Howard et al. 2019) has a latency of 60 MFLOPs, which is closer to our results, but with 2.54M parameters it is still much larger. As for ShuffleNet v2 0.5×, it has a latency of 40 MFLOPs and 1.37M parameters. We believe that studying these architectures will be beneficial for reducing the latency of our auto-generated models.
---
Rebuttal Comment 1.1:
Title: To Reviewer JTZn: Please respond to the author rebuttal
Comment: Dear Reviewer JTZn,
The deadline for author discussion period is approaching soon. Please respond to the author's rebuttal and indicate whether your concerns have been addressed. Thank you!
-AC | Rebuttal 1:
Rebuttal: ### Concerning larger datasets (in particular ImageNet1k):
* We have run extra tests on CIFAR-100.
The dataset split was identical to that of CIFAR-10: a training set of 45,000 random "training" images, a validation set of 5,000 random "training" images not already in the training set, a test set consisting of all 10,000 "test" images in the original dataset.
The preprocessing was identical to that of CIFAR-10: random crop + horizontal flip, normalization (with the same mean and SD values as CIFAR-10), cutout regularization.
DensEMANN's configuration is the same as for all other tests in the main paper, although we also tried to set the improvement threshold to IT = 0.005 to see if it affected performance significantly.
The time measurements for IT = 0.01 were all taken *sequentially* (i.e. one test at a time) on an MSi GT76 laptop. For IT = 0.005, they were performed *in parallel* in our internal cluster (up to three tests at the same time), and so we consider them to be less reliable. (See the paper for the full specs.)
The results (mean and standard deviation over 5 runs) are shown in the tables below:
| Dataset | GPU execution time (hours) | GPU inference time (seconds) | Num. layers per block | Trainable parameters (k) | Latency (M FLOPs) |
|:---:|:---:|:---:|:---:|:---:|:---:|
| CIFAR-100 (IT=0.01) | 15.41 ± 2.34 | 3.52 ± 0.32 | 7.6 ± 1.7 | 269.72 ± 48.42 | 105.05 ± 19.23 |
| CIFAR-100 (IT=0.005) | 22.10 ± 1.94 | 5.99 ± 0.70 | 11.0 ± 1.2 | 402.68 ± 41.89 | 156.31 ± 15.87 |
| Dataset | Validation set acc. (%) | Validation set loss | Test set acc. (%) | Test set loss |
|:---:|:---:|:---:|:---:|:---:|
| CIFAR-100 (IT=0.01) | 68.09 ± 0.33 | 1.16 ± 0.02 | 72.43 ± 0.60 | 1.05 ± 0.05 |
| CIFAR-100 (IT=0.005) | 68.54 ± 0.78 | 1.12 ± 0.02 | 73.50 ± 0.61 | 0.98 ± 0.01 |
* We are currently running tests on ImageNet1k. These will take some time: each epoch lasts around 8-12 minutes, so a full run of DensEMANN will take at least half a month. We can run various tests in parallel, though our computing resources are limited to what is described in the paper (MSi GT76, MSi GT75, internal cluster).
### Concerning other problems and application fields:
* We are planning to try DensEMANN on the ESC-50 audio dataset. The data will be turned into spectrograms, so CNN like DenseNet can still be applied to infer the labels.
* Similarly, we may use DensEMANN on other data that can be represented in a spectrogram-like manner, such as the electromyography wave signals in NinaPro DB5.
* We are also planning to try DensEMANN on object detection or image segmentation problems. We may still use DenseNet for this, but since the output channels and accuracy measures for these problems are different to those for classification this will take some more time to code. We may also develop a new version of the algorithm that grows architectures specifically designed for object detection (see below).
### Concerning other NN architectures:
* This is one of our main planned research routes for future work.
* We wish to try (Dens)EMANN-like approaches for the following kinds of NN:
* U-Nets (for image segmentation).
* YOLO v4-v6 like architectures (for object detection).
* Generative adversarial networks (GANs).
* Recurrent neural networks.
* Transformer networks.
* One or various of the above research routes will be followed in a future paper.
### Comparison with the state of the art:
At the request of one of the reviewers, in the enclosed PDF we provide a new scatter plot that compares DensEMANN to the NAS algorithms in Table 7, this time in terms of error rate on CIFAR-10 vs. algorithm execution time in GPU days.
Pdf: /pdf/c725bced38a39c15f9450785f2fcbd5782c63cea.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: This paper proposes a new and improved algorithm to grow small DenseNet architectures from scratch while simultaneously training them on target data.
Strengths: 1. The paper is very clear and readable.
2. The evaluation is comprehensive and detailed, demonstrating the effectiveness.
Weaknesses: 1. The novelty is limited. The algorithm builds on a well-known algorithm, and the changes to it are limited.
2. The scope of this algorithm is limited too.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Please analyze what improvement can be brought by the differences in section 3.2.
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 1 poor
Limitations: Can this algorithm be adapted to other application fields?
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > Please analyze what improvement can be brought by the differences in section 3.2.
1. Changes to the macro-algorithm:
* (a) The macro-algorithm's last layer addition can always be removed because it is in fact always useless, at least from an accuracy-improvement point of view. Indeed, the macro-algorithm only keeps adding layers if the latest layer addition has caused a significant change in the NN's accuracy. Otherwise, it stops. Keeping the last layer addition, which did not change accuracy significantly, would go against DensEMANN's core philosophy of keeping only those architecture elements that are necessary for a significant accuracy improvement (cf. the abstract).
* (b) In the original DensEMANN paper (García-Díaz and Bersini 2021, Section IV.B), various observations were used for arguing against setting the improvement threshold (IT) to 0.005:
1. With IT = 0.005, the last few layer additions often only brought a very gradual increase in accuracy (or no increase at all), at the cost of more trainable parameters and a deeper architecture.
2. In these last few layers, the micro-algorithm pruned many of the generated filters, suggesting that the limited accuracy gains with these layers are only due to a few of their components.
For this reason, we decided to increase the default IT value to 0.01. Nevertheless, acknowledging that the observations in (García-Díaz and Bersini 2021) were limited (only 2 tests were performed for each dataset, and one of the tests crashed before completion), we tested an IT value of 0.005 again for the CIFAR-100 benchmark (see global rebuttal for results).
2. Changes to the micro-algorithm:
* (a) As explained in (García-Díaz and Bersini 2021, Section IV.A), when the "useful" and "useless" filter categories are made completely independent from the "settled" category, an undesirable phenomenon takes place: during the micro-algorithm's improvement stage, many new filters are detected as "useful" at the moment of their addition... but later become "useless" once they settle.
Consequently, due to the micro-algorithm's behaviour during the improvement stage (when a filter becomes useful a new filter is added), a great number of superfluous filters are added one-by-one very quickly. The result is an extremely overparametrized layer, which takes lots of time and computation resources to train. Afterwards, when these superfluous filters start settling and many of them turn out useless, they are pruned en-masse in the next pruning stage.
In conclusion, if non-settled filters can be counted as "useful", then multiple superfluous filter additions take place during the micro-algorithm's improvement stage, causing the NN's last layer (and DensEMANN's time and computation cost) to grow needlessly during this stage.
* (b) The smaller default value for the patience parameter (PP) was motivated by Fig. 5 of the original DensEMANN paper (García-Díaz and Bersini 2021, Section III.B). This figure shows the evolution of the kCS values for filters in minimal DenseNets, when these are trained on the CIFAR-10 dataset. It shows that, if a filter is created in mid-training, it takes around 40 training epochs (with a constant learning rate) for that filter's kCS to acquire a relatively stable value, i.e., to "settle" on a learnt operation. This change was also motivated by the quick rate at which other NAS algorithms similar to DensEMANN perform growing operations. For instance, NASH (Elsken et al. 2017) modifies the NN every 17 epochs.
* (c) In (García-Díaz and Bersini 2021), with PP=300, the main goal of the patience countdown was to impose a maximum period of time during which growing operations were allowed. In this paper, with a smaller PP value of 40, if growing operations are limited by the countdown then the algorithm may miss on important improvement opportunities. The goal of the countdown is thus different: it imposes a minimum training period before the pruning stage.
* (d) Observations in (García-Díaz and Bersini 2021, see Fig. 5) and our own preliminary tests showed that the kCS of settled filters does not actually remain constant, but in fact decreases slowly over time. After trying the original paper's DensEMANN configuration, we discovered that very long recovery stages often caused a very harsh pruning of the layer's filters, after which it was near-impossible for the NN to recover its prepruning accuracy. If the NN did manage to recover its accuracy, it was only after a very long recovery stage, which caused the vicious cycle to repeat. We thus had to implement a series of mechanisms to avoid long recovery stages and their negative effects.
3. Dense block replication: this feature was motivated by cell-based NAS approaches in other algorithms, such as NASNet (Zoph et al. 2018), ENAS (Pham et al. 2018) and DARTS (Liu et al. 2019). Our main aim was to verify if a significant increase in accuracy could be achieved by merely replicating the generated dense-block a user-defined number of times. The experiment was successful (there indeed was an increase in accuracy), and we plan to further study the implications of this in future work. In particular, due to the known limitations of purely cell-based approaches (see Fair DARTS, Chu et al. 2020) we are currently interested in a hierarchical NAS approach that searches for the right macro-structure along which to copy the dense blocks.
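To make the filter-category coupling described in 2 (a) above concrete, here is a hypothetical sketch in which a filter may only be classified as "useful" or "useless" once its kCS has settled. All names, thresholds, and the window length are illustrative assumptions for the sketch, not values from the actual implementation.

```python
# Hypothetical sketch of filter classification where "useful"/"useless" are
# subordinated to "settled": a filter whose kCS is still moving is left alone,
# so the improvement stage does not over-grow the layer on transient signals.
# Thresholds and window sizes are purely illustrative.

def classify_filter(kcs_history, usefulness_threshold=0.5,
                    stability_window=5, stability_tol=0.01):
    """Return 'unsettled', 'useful', or 'useless' for one filter's kCS history."""
    if len(kcs_history) < stability_window:
        return "unsettled"
    recent = kcs_history[-stability_window:]
    if max(recent) - min(recent) > stability_tol:
        return "unsettled"   # still learning: neither grow on it nor prune it
    return "useful" if recent[-1] >= usefulness_threshold else "useless"
```

Under such a rule, the en-masse additions described in 2 (a) cannot happen: a newly added filter stays "unsettled" until its kCS stabilizes, so it cannot immediately trigger further growth during the improvement stage.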
> Can this algorithm be adapted to other application fields?
We are currently working on this as our main priority. We are testing DensEMANN on multiple classification datasets from different application fields, and extending the algorithm to non-classification-based tasks (e.g. object detection and image segmentation). See the global rebuttal.
---
Rebuttal Comment 1.1:
Title: To Reviewer o4mm: Please respond to the author rebuttal
Comment: Dear Reviewer o4mm,
The deadline for author discussion period is approaching soon. Please respond to the author's rebuttal and indicate whether your concerns have been addressed. Thank you!
-AC
---
Rebuttal Comment 1.2:
Comment: Thank you for your reply! I think the paper writing can be improved. Maybe another storyline that emphasizes your idea would be much better.
---
Reply to Comment 1.2.1:
Comment: Dear Reviewer o4mm,
Thank you very much for your reply!
I completely agree. This is a usual problem with my writing: I don't know how to "sell" my ideas well.
Other people have already pointed it out to me, and I will do my best to improve my storytelling in the future.
UniPC: A Unified Predictor-Corrector Framework for Fast Sampling of Diffusion Models | Accept (poster) | Summary: This paper proposes a predictor-corrector method to accelerate the diffusion sampling process, where a corrector is proposed to refine the initial estimation of x_t using previous and current points. The experiments are conducted on ImageNet and CIFAR-10, where the method outperforms existing efficient samplers at very few sampling steps.
Strengths: 1. Improving existing sampling method by a unified predictor-corrector solver is reasonable.
2. Adequate theoretical and empirical analysis of the proposed methods.
3. The paper is well organized and easy to understand.
Weaknesses: N/A
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: N/A
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The authors adequately addressed the limitations
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely thank the reviewer for the positive comments on our work, especially the appreciation of our newly proposed unified framework UniPC, our adequate theoretical and empirical analysis, and our superior performance. We hope our work can open a new avenue for improving the sampling quality in the few-step sampling scenario via a predictor-corrector paradigm and thus promote the application of AIGC. | Summary: This paper develops a unified corrector (UniC) that can be applied after any existing DPM sampler to increase the order of accuracy without extra model evaluations, and derive a unified predictor (UniP) that supports arbitrary order as a byproduct. Combining UniP and UniC, a unified predictor-corrector framework called UniPC for the fast sampling of DPMs has a unified analytical form for any order and can significantly improve the sampling quality over previous methods, especially in extremely few steps.
Strengths: On both unconditional and conditional sampling using pixel-space and latent-space DPMs, UniPC can achieve 3.87 FID on CIFAR10 (unconditional) and 7.51 FID on ImageNet 256×256 (conditional) with only 10 function evaluations.
Weaknesses: The paper repeatedly claims to be unified or model-agnostic, but no experiments support this claim; the method is only tested on DPMs. The writing also lost me many times while reading; I suggest rewriting the motivation to state which technical issue in DPMs you try to solve and why your method works. The theory on the faster convergence property is also confusing regarding accuracy: is it really useful in practice, and how is the accuracy theoretically guaranteed?
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: See the weaknesses above.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 2 fair
Limitations: See Weaknesses.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely thank the reviewer for the positive comments on our work! We address the questions and clarify the issues accordingly as described below.
**Q1: About model-agnostic**
**[Reply]** Sorry for the confusion. By “model-agnostic” we mean that (1) our UniC can be applied after any existing solver, which is demonstrated in Table 2; and (2) like other training-free samplers, our method can be used to sample from any off-the-shelf DPM (pixel- or latent-space DPMs of different resolutions; see Figures 2 and 3). We will clarify this in the revised paper.
**Q2: About the motivation**
**[Reply]** Thanks for your suggestion. Our main motivation is that existing solvers for DPMs often suffer from large accumulative error when sampling with <10 NFEs, due to the large step size. To overcome this issue, we propose a corrector that re-uses the current point to reduce the error and increase the order of accuracy (which is proved both theoretically and empirically). We will add the above discussion to make our motivation clearer.
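The predictor-corrector idea described in this reply can be illustrated on a toy ODE: a predictor extrapolates from previous points, and a corrector then re-uses the predicted current point to refine the estimate. The sketch below pairs a generic second-order Adams-Bashforth predictor with a trapezoidal corrector on dy/dt = -y; it only illustrates the general paradigm, not the authors' actual UniPC solver for diffusion ODEs (and, unlike UniC, this naive corrector does cost one extra function evaluation per step).

```python
import math

def f(y):
    return -y  # toy ODE dy/dt = -y, exact solution y(t) = exp(-t)

def solve(n_steps, correct, t_end=2.0):
    """Integrate dy/dt = -y with an Adams-Bashforth-2 predictor and,
    optionally, a trapezoidal corrector that re-uses the predicted point."""
    h = t_end / n_steps
    y_prev, y = 1.0, math.exp(-h)  # seed the 2-step method with the exact value
    for _ in range(n_steps - 1):
        f_prev, f_cur = f(y_prev), f(y)
        # predictor: extrapolate from previous points only
        y_pred = y + h * (1.5 * f_cur - 0.5 * f_prev)
        # corrector: refine using f evaluated at the predicted current point
        y_next = y + 0.5 * h * (f_cur + f(y_pred)) if correct else y_pred
        y_prev, y = y, y_next
    return y

exact = math.exp(-2.0)
err_pred = abs(solve(8, correct=False) - exact)
err_pc = abs(solve(8, correct=True) - exact)
print(f"predictor only: {err_pred:.5f}, with corrector: {err_pc:.5f}")
```

At the same step count, the corrected run has a noticeably smaller error than the predictor-only run, matching the intuition that re-using the current point reduces the accumulative error.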
**Q3: About the accuracy**
**[Reply]** The theoretical accuracy (i.e., order of accuracy/convergence order) is guaranteed by Theorem 3.1 (which is proved in Appendix E.3). We have also conducted extensive experiments to further show that our method can indeed improve the sampling quality of DPMs. We will also improve our writing to highlight the novelty and usefulness of our newly proposed UniPC. | Summary: This paper presents a universal predictor-corrector method for faster sampling of diffusion models, without model retraining. The key idea is to further include the current point along with the previous $p$ points while estimating the data point, by adding a correction step. It shows that this method can achieve an order of accuracy of $p+1$. Experiments on several image datasets show that the proposed method achieves strong performance in comparison to prior SOTA methods.
Strengths: The paper is generally written clearly
The problem of accelerating diffusion model is critical to solve
Reasonably good performance achieved
Weaknesses: **Limited novelty and design**:
- The idea proposed here is marginal in the sense that at each step, the current estimate is additionally used along with some previous points (the latter is the same as [25,34,40]). Given the sequential nature, the current point is actually already used by previous methods [25,34,40] in the next steps (with a step delay); it is just that this point is used more times here. No extra information is used, and there is no clear understanding of why this extra use of the current point leads to an increase in the order of accuracy.
- Besides, in general, there are two families of diffusion models: DDPM (e.g., [A, 34]) and SGM (e.g., [35]). The DDPM series is often easier to speed up because it adopts a variance-preserving (VP) process, in contrast to SGM's variance-exploding (VE) process. It is not clear which family the proposed method focuses on, or whether it covers both.
- It looks like a step count of 10 is a milestone. Is there any relationship or theoretical implication of the proposed method in tackling this issue?
- Also, the additional operations introduce some complex parameters, which add further burden to the model tuning process. It is unclear whether these parameters generalize across different datasets.
**Limited results gain**
- While the numerical results such as FID look good in comparison, the visual examples in the supplementary show generated images of similar quality to previous methods such as DPM-Solver++. It is known that numerical metrics are somewhat limited for visual perception evaluation. Overall, the generation performance gain is somewhat limited, not as convincing as claimed.
- The evaluation of UniC/UniP/UniPC is somewhat inconsistent. How is this selection done?
- Unless I am missing something, there is no exact evaluation of the benefit of using the current point in the estimation, which is the key design of the whole method.
**References**:
- [A] Jonathan et al. Denoising Diffusion Probabilistic Models. NeurIPS 2020
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: Please see the weaknesses above
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 2 fair
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the comments. We address the questions and clarify the issues accordingly as described below.
**Q1: About the novelty and design**
**[Reply]**
- As discussed in Section 3.1 and proved in Appendix E.3, our UniC can indeed increase the order of accuracy of the sampling procedure. We would like to clarify that our usage of the current point is better than [25, 34, 40] because we reduce the error of the current point via a corrector step, while in previous methods there are more accumulative errors. We will add more discussion about how our method works in the revised paper.
- Thanks for your advice. Our method is designed for variance-preserving (VP) diffusion models, similar to DPM-Solver. VP diffusion models are also more useful in practice, e.g., Stable-Diffusion and DeepFloyd-IF. We will clarify this in our paper.
- We think that the proposed corrector (UniC) is the main reason why our method works better than previous methods when NFE<10. Existing methods suffer from large accumulative errors with large step sizes. However, our method can mitigate this issue because we obtain a better estimation of the current point thanks to the extra corrector step. Please also refer to Table 2, where we show that UniC can boost the performance of a variety of existing solvers.
- Sorry for the confusion. Our method is totally training-free, containing no extra learnable parameters. Our method can serve as a drop-in replacement of existing samplers of diffusion models and accelerate the sampling process.
**Q2: About the results gain**
**[Reply]**
- One needs to focus on the detailed structure of the images when comparing the qualitative results. For example, in Figure 4 it can easily be found that our UniPC generates more realistic images than DPM-Solver++ (some blurry or broken regions can easily be observed in the images generated by DPM-Solver++). We also encourage the reviewer to have a look at the results of sampling from the larger model _**Stable-Diffusion-XL**_ in the attached one-page PDF, where we show that UniPC can generate more realistic images than the baseline method.
- For most of the experiments, we use UniPC (the combination of UniP and UniC) to compare with other methods. In the ablation study of UniC (see Table 2), we directly add UniC after a variety of existing solvers to demonstrate that UniC can be a plug-and-play module for improving sampling quality.
- Please refer to Table 2, where we clearly show that using the current point in the corrector step can consistently boost performance.
---
Rebuttal 2:
Title: Could you look at the authors' response?
Comment: Dear reviewer:
Please look at the authors' response and other reviewers' comments. Thanks! | Summary: In this paper, the authors present a novel sampling solver called UniPC for diffusion models. UniPC consists of two parts, UniC and UniP: UniC corrects the estimation using the prediction at the current timestep and can be applied to other sampling solvers, while UniP is a special case of UniC and shares a similar form for sampling. Compared with previous methods (e.g., DPM-Solver), empirical evaluations have verified the effectiveness of UniPC in generating better results with fewer sampling steps.
Strengths: - The proposed method can significantly reduce the number of sampling steps to less than 10, without losing the quality of the results.
- The proposed method has some generality and can further improve the approximate accuracy of other existing DPM samplers.
Weaknesses: - Experiments. Please conduct experiments on datasets with larger-resolution images to measure inference time, quality, and diversity.
- Recent work: Please add more details in the related work about DPM-Solver and related ODE-based sampling solvers.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: See weakness.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: See weakness.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely thank the reviewer for the positive comments on our work! We address the questions and clarify the issues accordingly as described below.
**Q1: About experiments on larger resolution images**
**[Reply]** Thanks for your advice. In our original paper, we have already conducted experiments on $512\times 512$ images (sampling from stable-diffusion). Here we compare our method and the baseline DPM-Solver++ using a larger model _**Stable-Diffusion-XL**_, which can produce $1024\times 1024$ images. We randomly select 200 captions and generate the samples using UniPC and DPM-Solver++ implemented in the diffusers library. Due to the larger resolution, we evaluate both methods with NFE=15, and the results are listed in the following table (the evaluation protocol is the same as the original paper):
|Method|Quality: $\ell_2$-Dist ($\downarrow$)|Diversity: IS($\uparrow$)|Inference Time (s)|
|------|----------|----------|---------|
|DPM-Solver++|0.741|8.494|4.33$\pm$0.02|
|UniPC|0.669|8.909|4.26$\pm$0.01|
Our results show that UniPC achieves better performance in all three metrics, indicating that UniPC can also be a good choice when sampling from a large diffusion model like Stable-Diffusion-XL. We also provide some qualitative results in the attached one-page PDF, where we show that the images generated by UniPC are more realistic and have much fewer artifacts compared with DPM-Solver++.
**Q2: About the recent work**
**[Reply]** Thanks for the suggestion. We will elaborate on the details of the related work and below is another paragraph we plan to add to Section 2.2 (due to a display issue of OpenReview, we cannot type too many formulae here, but we will include more details in the revised paper):
Based on the exponential integrator, [25] proposes to approximate $\hat{\epsilon}_{\theta}$ via Taylor expansion and views DDIM as DPM-Solver-1. [26] considers the data prediction scheme by rewriting (2) and demonstrates its effectiveness in conditional sampling. [40] derives the Taylor expansion formulae with respect to t instead of the half log-SNR. [24] employs pseudo-numerical methods such as the Runge-Kutta method. Although many such high-order solvers have been proposed, existing solvers of diffusion ODEs can only be explicitly computed for orders not greater than 3, due to the lack of analytical forms. | Rebuttal 1:
Rebuttal: We sincerely thank the reviewers for the positive feedback and valuable comments on our work. As suggested by Reviewer vcSP, we further compare the sampling quality of our method UniPC and the baseline DPM-Solver++ using _**Stable-Diffusion-XL**_, a newly released model which can generate $1024\times 1024$ images. We highly encourage the reviewers to have a look at the attached one-page PDF for the qualitative results, where it can be found that our method consistently generates more realistic images with fewer visual flaws compared with DPM-Solver++.
Pdf: /pdf/3acac6c12851831331f8459c1b5deac060863c65.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: This paper proposes an ODE sampler for diffusion probabilistic models (DPMs), exploiting the structure of exponential integrators. The paper claims the proposed sampler can use any existing DPM and achieve high sampling quality with very few (<10) function evaluations (NFE), and also improve upon related methods when using arbitrary NFEs.
Strengths: - The proposed method is well justified theoretically and the experiments validate the proposed approach.
- The paper clearly states the connections between the proposed method and related existing solvers.
- The experimental results include both pixel-space and latent-space diffusion and show consistent improvements when applying the method on top of existing solvers.
- The method's performance is superior to the baselines in both low and middle NFE regimes.
Weaknesses: 1) It would be nice if the paper included the statistical significance of the FID scores reported in Tables 1-6. In the ~10 NFE regime these are sometimes very close to the baseline, while for lower-quality low-NFE settings one could expect the FID to be not very informative. I plan to revisit my decision after considering the statistical significance of these results.
2) The paper repeatedly claims being able to achieve a higher order p than existing solvers. However, I wonder how important it is in practice to achieve order p>3. Table 4 shows that when including higher orders the results become poorer. I would also be curious to see this study for the higher-quality NFE regimes (e.g., NFE=10).
3) Minor:
- L. 39 "output of the model output"
- L. 68-69 extra subscript 0 on lhs?
- L. 71 "obtained"
- L. 97 (after period) Can this be rephrased in a less subjective manner?
- (5) I think $\hat{\epsilon}_\theta^{(k)}$ is not defined. Shouldn't the argument in this term vary with $k$?
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: See weaknesses.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely thank the reviewer for the positive comments on our work! We address the questions and clarify the issues accordingly as described below.
**Q1: About the statistical significance**
**[Reply]** Thanks for the advice. We consider the following hypothesis testing problem:
$$
H_0: M_{\rm DPM-Solver++}\le M_{\rm UniPC} \quad \text{versus} \quad H_1: M_{\rm DPM-Solver++}> M_{\rm UniPC}
$$
where $M$ is the metric of interest, and a lower value of $M$ implies better performance. We derive the statistical significance by running our method and the baseline method for independent trials on different datasets and conducting two-sample t-tests for the aforementioned hypothesis testing problem. For CIFAR10, FFHQ, and LSUN, the metric $M$ is FID50K, and we compute the results of 10 independent runs. For MS-COCO2014, the metric $M$ is the $\ell_2$ distance between the generated latent codes and the ground truth (obtained by a 999-step DDIM), which is calculated by randomly selecting 200 captions as conditions. Note that for both metrics (FID50K and $\ell_2$-Dist), lower is better.
The p-values of the two-sample t-tests on different datasets are presented in the following table (where we use NFE=8,10 as examples):
|Dataset \ $p$-value|NFE=8|NFE=10|
|---------|---------|---------|
|CIFAR10|$1.09\times 10^{-3}$|$1.46\times 10^{-3}$|
|FFHQ|$3.01\times 10^{-15}$|$3.66\times 10^{-12}$|
|LSUN Bedroom|$4.72\times 10^{-4}$| $4.53\times 10^{-5}$|
|MS-COCO2014|$7.04\times 10^{-5}$|$1.25\times 10^{-5}$|
We can find that all the $p$-values are far smaller than $0.01$, which means that we can reject the null hypothesis of no improvement at the significance of $0.01$. In other words, UniPC performs significantly better than DPM-Solver++.
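The one-sided two-sample t-test described above can be sketched as follows. The FID values here are made-up placeholders purely for illustration, not the authors' measurements; the test follows the stated hypotheses H0: M_baseline <= M_UniPC versus H1: M_baseline > M_UniPC (lower metric = better).

```python
import math
from statistics import mean, stdev

# Hypothetical per-run FID scores for 10 independent trials each
# (placeholder numbers, NOT from the paper).
baseline = [4.12, 4.25, 4.18, 4.30, 4.21, 4.15, 4.27, 4.19, 4.23, 4.16]
unipc    = [3.85, 3.92, 3.88, 3.95, 3.90, 3.83, 3.97, 3.87, 3.91, 3.89]

n1, n2 = len(baseline), len(unipc)
# pooled-variance two-sample t statistic
sp = math.sqrt(((n1 - 1) * stdev(baseline) ** 2 + (n2 - 1) * stdev(unipc) ** 2)
               / (n1 + n2 - 2))
t = (mean(baseline) - mean(unipc)) / (sp * math.sqrt(1 / n1 + 1 / n2))

# One-tailed critical value t_{0.01, df=18} ~= 2.552 from standard t-tables;
# a statistic above it rejects H0 at the 0.01 significance level.
print(f"t = {t:.2f}, reject H0 at 0.01: {t > 2.552}")
```

In practice one would report an exact p-value (e.g., via `scipy.stats.ttest_ind` with `alternative='greater'`) rather than comparing against a table value, which is what the rebuttal's reported p-values correspond to.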
**Q2: About the higher order**
**[Reply]** Thanks for pointing this out. We prove theoretically that higher-order solvers enjoy better accuracy in solving a diffusion ODE. In addition, our method offers a unified framework to empirically investigate the performance of higher-order solvers. From our empirical study, the schemes '123432' for NFE 6 and '1223334' for NFE 7 outperform the low-order schemes with $p \leq 3$ and all the existing solvers on CIFAR10. Our results also show that simply increasing the order by introducing more _**previous points**_ might not be beneficial to performance. We speculate that this is because the model output at previous points may be less accurate and thus affect the subsequent sampling steps. On the other hand, our UniC increases the convergence order by re-using the _**current point**_, which can consistently improve the sampling quality over the baseline methods (see Table 2). Here we provide some results of different order schedules when NFE=10 on CIFAR10:
|schedule|1223433321|1233343321|1234544321|
|------------|-----------------|----------|------------|
|FID$\downarrow$|4.07 |4.14|4.76|
|schedule|1234554322|1234565432|1234444443|
|------------|-----------------|----------|------------|
|FID$\downarrow$|5.41|18.23|6.84|
**Q3: About the minor issues**
**[Reply]** Thanks for your careful reading. We will modify these as follows:
- L. 39: "output of the model output" $\rightarrow$ "model output"
- L. 68-69: We use $q_{t0}$ to represent the transition probability from $x_0$ to $x_t$. We will change it into $q_{t|0}$ for better readability.
- L. 71: "obtain" $\rightarrow$ "obtained"
- L. 97: We will change this sentence into: "Despite the rapid development of fast samplers, the quality in few-step sampling still has room for improvement. "
- (5): The superscript $k$ denotes the $k$-th derivative of $\hat{\epsilon}_\theta$. We will add the notation in the revised paper.
---
Rebuttal Comment 1.1:
Title: Thank you for your rebuttal and the stastistical significance test.
Comment: The authors have satisfactorily addressed my concerns in their rebuttal.
---
Reply to Comment 1.1.1:
Title: Thanks for the response
Comment: Thanks a lot for the response. We are glad to hear that our rebuttal has satisfactorily addressed your concerns. Would you please consider raising your score? | null | null | null | null | null | null |
Boosting Verification of Deep Reinforcement Learning via Piece-Wise Linear Decision Neural Networks | Accept (poster) | Summary: This paper proposes an inverse transform-then-train approach for verifying deep reinforcement learning systems. It encodes a DNN into efficiently verifiable linear control policies and optimizes them via reinforcement learning. The approach is compatible with existing DRL training algorithms and shows that PLDNN-based systems can be more efficiently and tightly verified, with up to 438 times speedup and a significant reduction in overestimation.
Strengths: + The paper explores an interesting and important direction
+ Innovative and practical methodology
Weaknesses: - Some technical parts are unclear
- The way of presenting contributions of the work is somewhat misleading
- The limitations of using PLDNNs are not discussed
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors:
## Originality
To efficiently verify deep reinforcement learning systems, this paper proposes an inverse transform-then-train technique. It efficiently converts a DNN into verifiable linear control policies, which it then uses reinforcement learning to improve. The method is compatible with current DRL training algorithms and demonstrates that PLDNN-based systems can be more effectively and tightly verified, with speedups of up to 438 times and a notable decrease in overestimation.
## Importance of contribution
The idea of using PLDNNs to boost the verification of DRL is interesting. However, it seems that the DNN is converted into a different structure before training, and the PLDNN differs from the original DNN. To me, it is more like a synthesis technique than verification. Moreover, the authors claim that it is compatible with most existing DRL training algorithms, but the details of the training algorithms are not shown in the evaluation.
## Soundness
This paper proposes an inverse transform-then-train approach for verifying deep reinforcement learning systems, overcoming inaccuracies and scalability issues. It uses piece-wise linear decision neural networks (PLDNNs) for efficient verification, reducing overestimation. However, there are some technical parts that are not so clear and need further clarification.
Page 1, Line 21
> Most existing approaches [7–10] over-approximate both embedded DNNs and non-linear environment dynamics to build verifiable models, which inevitably introduces dual overestimation.
I am not sure what this dual overestimation is. The following sentence only seems to talk about the disadvantage of over-approximation.
Page 2, Line 40
> ..., we propose a novel, inverse transform-then-train approach:
It seems like the approach cannot verify a trained DRL agent. So it is more like a synthesis technique.
Page 5, Line 197
> Consequently, we can extract a piecewise linear decision function with this structure of π on each abstract state.
How are the piecewise linear functions extracted? Could you explain this in more detail?
## Evaluation
The authors try to evaluate whether PLDNN-based training offers reduced partitions, comparable rewards, robustness, acceptable time overhead, high verification performance, and scalability to large neural networks and complex systems with high-dimensional state spaces. In Section 5.2, the authors compare Lincon with Polar and Verisig 2.0. Have you compared the efficiency of these tools on the original DNNs? The PLDNN architecture may not be easy for Polar and Verisig 2.0 to handle.
## Quality of presentation
The paper is not that easy to follow. I would recommend the authors add more motivation for the design of each individual component.
## Comparison with Related Work
The PLDNN structure is similar to the DNN with abstract states in the work listed below. What are the main differences and advantages of using PLDNN?
- Jin, Peng, et al. "Trainify: a CEGAR-driven training and verification framework for safe deep reinforcement learning."
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 2 fair
Contribution: 2 fair
Limitations: No, the authors did not discuss the limitations of the work. I think more attention should be given to the disadvantages of using PLDNNs, e.g., is it still the same as the original network?
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Question for the importance of contribution:** (i) It is more like a synthesis technique, rather than a verification technique. (ii) Algorithm compatibility of PLDNN.
**Response:** (i) Yes, it can be understood as a synthesis method for developing verification-friendly models. We show that the powerful fitting capability of DNNs can be leveraged to train easy-to-verify PLDNNs. One novelty of our work is that we use a DNN as a backbone to implement policy synthesis. (ii) Our approach is compatible with most existing DRL training algorithms because we do not change the input and output structure of the decision network. Namely, the differences from the original DNN are invisible to the DRL algorithms.
**Question for Page 1, Line 21:** What is this dual overestimation?
**Response:** The first overestimation is introduced when dealing with the DNN controller. A traditional DNN cannot be expressed in a known closed form, so most approaches resort to computing a conservative model, such as a Taylor model, that encloses the output of the DNN given an input set; this is the first overestimation, for the decision neural network. The second overestimation is introduced to obtain an over-approximating model for the solution of the ODE system dynamics. That is unavoidable, as most nonlinear ODEs do not have closed-form solutions. Using a PLDNN as the decision neural network, we can eliminate the first overestimation because a PLDNN can be expressed in a known closed form (i.e., a piece-wise linear function).
**Question for Page 2, Line 40:** It seems like the approach cannot verify a trained DRL agent, so it is more like a synthesis technique.
**Response:** Yes, a traditional DNN-based agent cannot be verified directly in our approach. Our insight is that training/synthesizing verification-friendly models is more practical for developing certified DRL systems than directly verifying complex, canonically trained DNNs. We demonstrate this in the paper.
**Question for Page 5, Line 197:** How are the piecewise linear functions extracted?
**Response:** The extraction is straightforward. Due to the special design of PLDNNs (Figure 2 in our submission), we can feed an arbitrary state in each partition into the embedded coefficient network $\pi_c$ and it will output the linear coefficients of the corresponding linear controller defined on that partition. As the partitions are finite, we can traverse all of them and obtain extracted linear controllers that are equivalent to the PLDNN.
**Question for Evaluation:** Have you compared the efficiency of these tools on the original DNNs? The architecture of PLDNN may not be easy for Polar and Verisig 2.0 to handle.
**Response:** It is true that PLDNNs cannot be handled by Polar and Verisig 2.0, but that is not our intention in this work. Since we can extract equivalent linear controllers from a PLDNN, we do not need to cope with the network decision model during verification. Instead, we can leverage off-the-shelf hybrid system verification tools such as Flow\* to verify PLDNN-based systems. To demonstrate the verification-friendliness, we provide the verification results for PLDNNs and the original DNNs in Table 2 of our submission, respectively. The results show that DNN-based systems are difficult to verify even with state-of-the-art tools like Polar and Verisig 2.0 and hardware acceleration. In contrast, PLDNN-based systems are more efficient to verify even without hardware acceleration. Moreover, the verification results are more precise because the DNN over-approximation is avoided.
**Question for Comparison with Related Work:** What are the main differences and advantages of using PLDNN?
**Response:** There are two main advantages of our approach. First, we train a linear controller on each partition, while other approaches such as Trainify [1] train a constant action. Consequently, the number of partitions we need is far smaller than that needed by Trainify. Second, with PLDNN, we can solve the verification problem by transforming the PLDNN-based system into an equivalent hybrid model, which can be verified using off-the-shelf tools such as Flow\*. In contrast, Trainify needs to build Kripke structures from the trained models, which quite often suffers from the notorious state-explosion problem due to the exponential increase of partitions during training.
[1] P. Jin, J. Tian, D. Zhi, X. Wen, and M. Zhang, “Trainify: A CEGAR-driven training and verification framework for safe deep reinforcement learning,” in International Conference on Computer Aided Verification. Springer, 2022, pp. 193–218.
---
Rebuttal Comment 1.1:
Comment: Thank you for the answer. I would like to change the score to weak accept. | Summary: The paper presents an approach for designing neural network policies, in the context of deep reinforcement learning (DRL) with continuous state and action spaces, that are more amenable to verification compared to standard networks. A standard approach to verifying continuous DRL systems is to abstract the system after learning the policy and to then verify the abstracted system. In contrast, the proposed approach first abstracts the state space (such that the state space is discretized) and then learns a linear policy for each abstract state. The coefficients of the linear policy are learnt using a coefficient network that maps each abstract state to its resulting policy coefficients. This mapping itself need not be linear and can be a complex neural network. However, since the state space is discretized and the policy for each abstract state is fixed and linear, tight and efficient verification of the resulting DRL system becomes feasible. Moreover, the state abstraction computation is itself encoded as the first layer of the coefficient network, so that the resulting policy network (referred to as piece-wise linear decision neural networks or PLDNNs) has a standard input (system state) and an output (actions), and can be directly trained using off-the-shelf DRL algorithms. The empirical evaluation shows that the performance of PLDNNs, measured in terms of cumulative reward and robustness of the policy, is comparable to standard policy networks. At the same time, DRL systems with PLDNN policies are much more amenable to verification.
Strengths: Even when the system dynamics are known, formal verification of safety and liveness properties of DRL systems is challenging to scale, especially for continuous state and action spaces. The paper proposes a very interesting idea for architecting policy networks that are easier to verify. I believe that the proposed notion of abstracting the state space before learning the policy will be a fruitful research direction. I also find the observation that a small set of linear policies can perform as well as a complicated non-linear policy to be surprising and interesting in its own right. I should add the disclaimer that I am not an expert on the topic of reinforcement learning and do not have a good sense of the broader literature on the topic.
Weaknesses: My main concern is about the "scalability of verification vs cumulative reward" tradeoff for DRL systems with high-dimensional state spaces (for instance, when the inputs to the policy network are images). As the state space becomes more complex, I suspect that a finer-grained partition becomes necessary to learn a good policy, directly impacting the scalability of verification. The presented evaluation considers system states with up to 12 dimensions, so it is hard to make an assessment about PLDNNs in settings where inputs are images requiring hundreds to thousands of dimensions.
I am also curious about how the proposed approach would be used when the action space is discrete. Would each abstract state be mapped to a fixed action? Some discussion about this setting could be interesting and helpful. Finally, I have minor concerns about the empirical evaluation (described in the Questions section below).
Technical Quality: 4 excellent
Clarity: 3 good
Questions for Authors: I have some clarification questions about the evaluation.
1. What were the model architectures used for the results reported in Section 5.1? Are these the same architectures as in Section 5.2?
2. Are the architectures for PLDNN and DNN the same (except for the first layer) for the results in Section 5.1 and Section 5.2? If not, how were the DNN architectures chosen and how do we ensure that it is an apples-to-apples comparison?
3. Is the time reported in Table 1 in seconds? And is it the total training time over all episodes?
4. How is the reward for the robustness experiments calculated, i.e., what is the horizon length? More generally, what is the reward function? The appendix gives some details; it says that for the 6 regular benchmarks "we set a negative reward when the agent is not in the goal region. Once the agent reaches the goal region, it will be awarded a positive reward". How and what is the positive reward assigned? Why is the final cumulative reward so different across the 6 regular benchmarks (based on Fig 5 and 7) if their reward functions are so similar?
5. Are the models used for the robustness experiment the same as the ones used for the cumulative reward experiment? If yes, then why do you say that 10 different policies are trained for the former while you conduct five trials for the latter? My understanding is that the models used for the robustness experiment are trained on non-perturbed data but the evaluation is conducted with perturbed data. If this is so, then the same models can and should be used for both sets of experiments.
6. What are precisely the properties of the DRL systems being verified? Based on Table 3 in the appendix, my guess is that for B1-B5, Tora, and QUAD the verified property is a liveness property whereas for CartPole it is a safety property? Can you please confirm? Is the time horizon for properties bounded or unbounded? It would be helpful to include this detail in the paper as well.
Some more general questions:
1. Can you comment on the applicability of this approach in settings where the inputs are high-dimensional images?
2. Can the approach be adapted to settings where the action space is discrete?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 3 good
Contribution: 4 excellent
Limitations: The paper does not necessarily discuss the limitations and might benefit from a small discussion about the same.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Weaknesses 1:** (i) It is hard to make an assessment about PLDNNs in
settings where inputs are images. (ii) How the proposed approach would
be used when the action space is discrete?
**Response:** (i) Yes, applying PLDNNs when inputs are images is
difficult due to the high dimensionality. At present, we are only
focused on state-based cases. One potential solution to adapting PLDNNs
to images is first to extract features from images and train PLDNNs on
the extracted features. We believe that this is an interesting research
direction to follow.
(ii) For the discrete action space, please refer to our response to
**General Question 2** below.
**Question 1:** What were the model architectures used for the
results reported in Section 5.1? Are these the same architectures as in
Section 5.2?
**Response:** We use the larger network structure with Tanh activation
function (e.g. Tanh$_{3 \times 100}$), as shown in Table 2 in our submission, to conduct
the training and robustness evaluation.
**Question 2:** Are the architectures for PLDNN and DNN the same
(except for the first layer) for the results in Section 5.1 and Section
5.2?
**Response:** Yes, they are almost the same. Precisely, they have the
same architectures except that PLDNN has an abstraction layer and an
additional output layer for outputting linear coefficients. The hidden
layers of PLDNN and DNN are exactly the same in our comparison
experiments.
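As a rough illustration of this architecture, the sketch below (our own reconstruction; the interval grid, the sizes, and a plain coefficient table standing in for the coefficient network are all assumptions, not the paper's implementation) shows how a state is mapped to its abstract cell and then to that cell's fixed linear action:

```python
import numpy as np

def abstract_index(s, lows, highs, parts):
    """Map a concrete state to the index of its abstract (interval) cell."""
    idx = 0
    for v, lo, hi, k in zip(s, lows, highs, parts):
        cell = min(int((v - lo) / (hi - lo) * k), k - 1)  # clamp the upper boundary
        idx = idx * k + cell
    return idx

class PLDNN:
    """Piece-wise linear decision network: each abstract state owns a fixed
    linear policy a = W s + b. Here a plain table of coefficients stands in
    for what the learned coefficient network would produce."""
    def __init__(self, n_cells, state_dim, action_dim, rng):
        self.W = rng.normal(size=(n_cells, action_dim, state_dim))
        self.b = rng.normal(size=(n_cells, action_dim))

    def act(self, s, lows, highs, parts):
        i = abstract_index(s, lows, highs, parts)
        return self.W[i] @ np.asarray(s) + self.b[i]
```

Because every state in the same cell shares one (W, b) pair, a verifier only needs to reason about a single linear map per abstract state, which is what makes the resulting controller verification-friendly.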
**Question 3:** Is the time reported in Table 1 in seconds? And is
it the total training time over all episodes?
**Response:** Yes, the time reported in Table 1 in our submission is in seconds, and it is
the total training time over all episodes.
**Question 4:** How is the reward for the robustness experiments
calculated, i.e., what are the horizon length and the reward function? Why is
the final cumulative reward so different across the 6 regular
benchmarks?
**Response:** The horizon length in CartPole is set to 200. In the other
benchmarks, the settings of horizon length depend on the number of
control steps needed to reach the goal region. For example, in B1, it
takes about 30 control steps to reach the goal region after the training
phase. In B2, the agent needs about 15 control steps to reach the goal
region. Based on this, we set the horizon length slightly larger than
the control steps, e.g. 35 for B1 and 20 for B2.
For the reward function, the positive reward when the agent reaches the
goal region is set to a constant such as 100. The negative reward
function is set according to the distance between the agent and the
center of the goal region. For instance, since the goal region of B1 is
$x_1 \in [0,0.2], x_2 \in [0.05,0.3]$, we set the negative reward
function as $-|x_1 -0.1| - |x_2 - 0.2|$. Hence, to obtain a higher
cumulative reward, the agent needs to reach the goal region as soon as
possible.
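A minimal sketch of such a reward for B1, using the values quoted above (the constant bonus of 100 and the exact shaping are illustrative assumptions):

```python
def b1_reward(x1, x2, goal_bonus=100.0):
    """Distance-shaped reward for benchmark B1: a constant positive bonus
    inside the goal region, and a negative reward that grows with the
    distance to the goal-region center (0.1, 0.2) outside it."""
    in_goal = 0.0 <= x1 <= 0.2 and 0.05 <= x2 <= 0.3
    if in_goal:
        return goal_bonus
    return -abs(x1 - 0.1) - abs(x2 - 0.2)
```

Because the penalty accumulates every step spent outside the goal region, the agent maximizes cumulative reward by reaching the goal as quickly as possible, exactly as described above.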
As different control steps are needed to reach the goal region in each
benchmark, the cumulative reward is different across the six regular
benchmarks even if their reward functions are similar.
**Question 5:** Are the models used for the robustness experiment
the same as the ones used for the cumulative reward experiment?
**Response:** Yes, the five models used in the reward evaluation are the
same as for the robustness evaluation. However, as the robustness
assessment is based on perturbed data, which introduces additional
uncertainty, we performed five additional trials to produce more precise
averaged robustness evaluation results.
**Question 6:** (i) What are the properties being verified? (ii) Is
the time horizon for properties bounded or unbounded?
**Response:** (i) For B1-B5, Tora, and QUAD, we verify a liveness
property that checks whether the agent can reach the goal region within
some bounded control steps. For CartPole, we verify a safety property:
whether the agent can stay in the safe region within some specific
control steps. (ii) Yes, all the time horizons are bounded because our
verification is based on reachability analysis which only calculates the
overestimated sets of reachable states in finite steps, as required by
the off-the-shelf tool $\text{Flow}^*$.
**General Question 1:** Can you comment on the applicability of this
approach in settings where the inputs are high-dimensional images?
**Response:** Since PLDNN-based training only costs slightly more time
than traditional DNN-based training, we believe that PLDNNs can be
applied to the training phase when dealing with high-dimensional image
input such as image classification. That is, the piece-wise linear
function $\pi$ and softmax function $\mathit{softmax}$ are composed to
yield a function that outputs the probability of each category in the
form of $\mathit{softmax}(\pi(s))$. However, the verification cannot
directly handle the high-dimensional image input due to the extremely
high-dimensional input space. Another reason is that the dynamics of
pixels may not be definable. If we can extract some informative features
from the image input first, combining the verification technique may be
feasible.
**General Question 2:** Can the approach be adapted to settings
where the action space is discrete?
**Response:** Yes. One straightforward approach is to make each abstract
state perform the same action. Moreover, if we still want to maintain a
linear output on each abstract state, we only need to generate a mapping
between the continuous output and the discrete action space. For
example, given a discrete action space $\{1,-1\}$, when the output of
the decision neural network is greater than 0, the agent executes action
1; otherwise, it performs action -1:
$$a=
\begin{cases}
1 & \text{if }\pi(s)>0,\\\\
-1 & \text{otherwise.}
\end{cases}$$
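A one-line sketch of this thresholding (illustrative only; `pi_output` stands for the continuous output $\pi(s)$ of the decision network):

```python
def discretize_action(pi_output, threshold=0.0):
    """Map the continuous output of the decision network to the discrete
    action space {1, -1} by thresholding, as in the case split above."""
    return 1 if pi_output > threshold else -1
```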
---
Rebuttal Comment 1.1:
Title: Response to rebuttal
Comment: Thank you for the detailed response to my questions. I will keep my score. | Summary: The paper proposed a new neural network architecture, PLDNN, for better verification of DRL-trained, neural-network-controlled closed-loop systems. PLDNN differs by abstracting the input space with intervals and applying a linear mapping in each abstract state. The controller represented by a PLDNN can be integrated with different DRL algorithms. The experimental results showed that PLDNNs can retain performance when trained directly with DRL algorithms and achieve tighter reachable sets with better verification efficiency.
Strengths: - The paper is sound and the topic of the paper is of high interest to the research community.
- The paper does a good job at introducing technical details and makes the paper easy to follow.
- The empirical study shows great improvement of the proposed method over existing reachability analysis methods for neural-network controlled systems.
Weaknesses: - My main concern about the paper is the lack of analysis on the policy for reducing the partitions. Though the experimental results show strong results for PLDNN, it is not clear what role partitioning plays in the performance improvement. It could be that for some partitioned regions of the state space, a linear control policy is already close to the optimal control policy, while other regions may need finer partitions to approach the optimal control policy with a combination of linear control policies. However, similar insights or observations are not provided in the paper. Some analysis of how the number of partitions evolves during training, or a comparison between the linear policy with partition reduction and a baseline such as a fixed number of partitions, would be great to have.
- The paper positioned PLDNN as a new transform-then-train approach. Training with PLDNN may result in some implicit benefits of finding near-optimal policy represented by PLDNN. But PLDNN and the partition schema are still applicable for the train-then-transform process, e.g., distilling a trained DNN to PLDNN. I would recommend authors to provide an empirical comparison between using PLDNN with transform-then-train and train-then-transform processes.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: - Could you provide more details about the linear policy for reducing the partitions?
- Could you provide more details on how the networks are trained for the benchmarks in the experimental section, e.g., training time? Are all networks trained from scratch?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Some limitation discussion on training time cost would be great to have.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Weaknesses 1:** My main concern of the paper is lack of analysis on
the policy of reducing the partitions. Though the experimental results
show strong results of PLDNN, it is not clear how partitioning plays a
role in the performance improvement. Some analysis on how number of
partitions evolves during training or a comparison between the linear
policy of partition reduction and a baseline, like a fixed number of
partitions, would be great to have.
**Response:** From the perspective of function fitting, more partitions
tend to obtain a policy that fits the optimal decision function better,
which implies a better reward performance. In our experiment, we compare
with an approach which trains constant actions on partitioned regions
[1]. Under a similar performance constraint, our approach achieves a
significant partition reduction (Table 1 in our submission). Using a
fixed number of partitions is a reasonable baseline which can also show the
effect of partition reduction. In our future work, besides the baseline with a fixed number of partitions, we will further
consider building another baseline, i.e., the minimal number of
partitions that are needed for training near-optimal piece-wise linear
controllers.
**Weaknesses 2:** I would recommend authors to provide
an empirical comparison between using PLDNN with transform-then-train
and train-then-transform processes.
**Response:** Thanks for your insightful suggestion!
Train-then-transform via PLDNN is indeed a direction that deserves
further study. We did not consider this as one of our intentions is to
demonstrate the feasibility of training verification-friendly and
near-optimal PLDNNs directly. We agree that PLDNNs are also achievable
via the train-then-transform approaches. Encouraged by the suggestion,
we carried out a quick experiment on the comparison. The results show
that the directly trained PLDNN exhibits similar performance (reward is
52.97) to a canonically trained DNN (reward is 52.45), while there is
a small decrease in performance (reward is 51.25) of the PLDNN
transformed from the trained DNN. See the attached PDF file and our
global response for more details. However, such a decrease is almost
negligible and more comprehensive experiments are required to draw fair,
conclusive results.
**Question 1:** Could you provide more details about the linear
policy for reducing the partitions?
**Response:** We compare the partition reduction of linear policy with
an approach [1] that trains a constant action on each
partition (Table 1 in our submission). For both approaches, we start from a coarse-grained
partition (i.e., one region) to train using the DDPG algorithm and
increase the number of partitions on each dimension until the preset
reward threshold is achieved. The linear policy relaxes the constraint
that each abstract state needs to correspond to a constant action. Our
evaluation shows that using linear policy, rather than a constant, can
significantly reduce the number of partitions under the same reward
threshold setting, e.g., from $25^4$ to $16$.
**Question 2:** Could you provide more details on how networks are
trained for benchmarks in the experimental section, e.g., training time?
Are all networks trained from scratch?
**Response:** We use DDPG as the training algorithm for all PLDNNs and
DNNs. In addition, all the networks are trained from scratch and the
corresponding average training time is reported in Table 1 in our submission (in seconds).
The setting of the reward functions is given in Appendix A.1 of our
accompanying technical report (submitted as supplementary material). We
use the larger network structure with the Tanh activation function
(Table 2 in our submission) to conduct the training and robustness evaluation.
[1] P. Jin, J. Tian, D. Zhi, X. Wen, and M. Zhang, "Trainify: A CEGAR-driven training and verification framework for safe deep reinforcement learning," in International Conference on Computer Aided Verification. Springer, 2022, pp. 193–218.
---
Rebuttal Comment 1.1:
Title: Response to rebuttal
Comment: Thank you for the detailed response especially for the newly added analysis on train-then-transform approaches. I would encourage the authors to investigate how transform-then-train may benefit the reward compared with the train-then-transform in the future work. At the current stage, I would like to keep my original score. | Summary: This paper presents an approach towards more easily verifiable DRL agent. Instead of training a neural network and then applying verification tools to it, the paper proposes to partition the input state, train linear policies in each of the partition, and verify the resulting piecewise-linear policy as a hybrid automation in Flow*.
Strengths: - Verifying DRL systems has gained research attention in the past several years due to its supposed application in many safety/cost-critical scenarios. Currently, scalability is a major concern. Therefore, the topic of the paper is of interests to the ML and verification community.
- While most existing work verifies a neural network controller via input splitting + abstract interpretation, the paper proposes an interesting and novel alternative, which is to eagerly partition the input region before training and make sure that a linear policy is trained for each partition. Compared to the train-and-transform approach, the proposed approach guarantees linearity in each partition, which leads to faster verification empirically.
- The proposed method is relatively easy to implement in existing training framework. The engineering trick that inserts hand-crafted neural network layers to make sure input from the same partition will be multiplied with the same weights is clever.
Weaknesses: Conceptually, the method leverages the observation that a relatively small set of linear functions is sufficient to achieve comparable performance to a more complex neural network. This observation is itself rather surprising and deserves a closer study. For the same input region where the PLDNN is linear, is the behavior of the neural network also relatively linear? Are there input regions where non-linearity is truly needed and PLDNN's behavior is problematic?
If low training loss can truly be obtained by a set of linear functions, it seems reasonable to expect that the behavior of the neural network in the same partition is close to linear as well, making abstract interpretation based techniques relatively precise. Following this thought, the performance gain in verification precision might be a construct of the baseline neural network being unnecessarily large. Have the authors tried whether smaller networks can result in similar performance?
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: How do you determine the number of segments along each dimension?
How is the system dynamic handled? Is it encoded precisely?
To my knowledge, Verisig 2.0 also uses Flow*. Is it fair to say that the main difference between LinCon and Verisig 2.0 is that the former verifies a PLDNN while the latter verifies a canonically trained neural network?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: The paper does not explicitly discuss its current limitation.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Weaknesses 1:**\
(i) The observation that a relatively small set of linear functions is
sufficient to achieve comparable performance deserves a closer study.\
(ii) For the same input region where the PLDNN is linear, is the
behavior of the neural network also relatively linear?\
(iii) Are there input regions where non-linearity is truly needed and
PLDNN's behavior is problematic?
**Response:** (i) We hold the same opinion and believe that this
observation would stimulate more studies in this research direction. In
fact, there have recently been several related works. For example,
in [1], a deterministic program with a small set of
linear functions (less than four) is learned as a safety shield. In
[2], the "if-then-else" structured programs whose
depth of syntax tree is between two and five can fulfill different
control tasks. Hence, it is feasible to achieve good performance with a
relatively small set of linear functions.
(ii) The answer is yes if there is a unique optimal control policy.
This is because the PLDNN can be considered as an approximation of the
optimal decision neural network. However, if there can be multiple
optimal control policies, an agent may make different decisions even in
the same state with different policies, but all the different decision
sequences can achieve optimal performance globally. In this case, the
linearity of a PLDNN in the same input region does not necessarily imply
that the neural network has the same relatively linear behavior.
(iii) Yes, in theory, regions may exist where non-linear behavior
is truly needed. However, a nonlinear function can be well-fitted by a
set of linear functions from the perspective of function fitting. The
widely-used ReLU neural networks are such an example, which are
essentially a piece-wise linear function as well (see
[3] for details). More partitions can be applied to
perform a similar non-linear decision.
**Weaknesses 2:** Can smaller networks result in similar
performance?
**Response:** Yes, that is possible in practice. However, whether
smaller networks result in similar performance depends on the complexity
of DRL control tasks, such as the system dynamics and the control
target. Our experimental results show that: for some simple cases such
as B1 in Table 2 in our submission, the smaller networks Tanh$(2 \times 20)$ and
ReLU$(2 \times 20)$ can achieve comparable performance. In contrast, in
CartPole with relatively complex system dynamics, a small neural network
such as Tanh$(2 \times 20)$ cannot obtain high performance.
**Question 1:** How do you determine the number of segments along
each dimension?
**Response:** The number of partitions depends on the training reward of
the DRL control system. We start training from a small number of
partitions. If the preset reward threshold is not reached, meaning that
there may exist some regions that need a non-linear decision, we will
further divide the state space till the preset reward threshold can be
reached.
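The refinement strategy just described can be sketched as a simple loop; `train_and_evaluate` stands in for the DDPG training plus reward evaluation, and the doubling schedule and budget are our assumptions, not the paper's exact procedure:

```python
def refine_partitions(train_and_evaluate, reward_threshold,
                      init_parts=1, max_parts=64):
    """Grow the number of segments per state dimension until the trained
    piece-wise linear policy reaches the preset reward threshold."""
    parts = init_parts
    while parts <= max_parts:
        reward = train_and_evaluate(parts)
        if reward >= reward_threshold:
            return parts, reward
        parts *= 2  # refine: some regions may still need a non-linear decision
    raise RuntimeError("reward threshold not reached within the partition budget")
```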
**Question 2:** How is the system dynamic handled? Is it encoded
precisely?
**Response:** Yes, the system dynamics $f$ is exactly encoded into the
flow component of a hybrid automaton in the form of ordinary
differential equations (ODEs)
$F(l_0): \{\dot{s} = f(s, a), \dot{a} = 0, \dot{t}_c = 1\}$. We then
use the tool Flow\* to compute conservative results that contain the
solution of the ODEs.
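For intuition only, the semantics this flow encodes (the action is recomputed once per control period and then frozen, matching $\dot{a} = 0$, while $\dot{s} = f(s, a)$ evolves) can be sketched with plain Euler integration; Flow* itself computes sound over-approximations of these trajectories rather than single simulations. All names and step sizes below are illustrative:

```python
def simulate(f, policy, s0, control_period=0.1, dt=0.01, n_controls=10):
    """Euler-simulate the closed loop: the policy picks an action once per
    control period, and that action stays constant (a_dot = 0) while the
    continuous dynamics s_dot = f(s, a) evolve."""
    s = list(s0)
    trace = [tuple(s)]
    for _ in range(n_controls):
        a = policy(s)                       # piece-wise linear decision
        t = 0.0
        while t < control_period - 1e-12:   # integrate one control period
            ds = f(s, a)
            s = [si + dt * dsi for si, dsi in zip(s, ds)]
            t += dt
        trace.append(tuple(s))
    return trace
```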
**Question 3:** Is it fair to say that the main difference between
LinCon and Verisig 2.0 is that the former verifies a PLDNN while the
latter verifies a canonically trained neural network?
**Response:** Yes, that is the main difference. Our insight is that
training verification-friendly controllers such as PLDNNs can be a good
alternative to developing certified DRL systems. As shown in our
experiments, the verification results by LinCon are very close to the
simulation results, while the results by Verisig 2.0 may fail the
verification (Table 2 in our submission) due to the over-approximation of canonically
trained neural networks.
[1] H. Zhu, Z. Xiong, S. Magill, and S. Jagannathan, "An inductive synthesis framework for verifiable reinforcement learning," in Proceedings of the 40th ACM SIGPLAN Conference on Programming Language Design and Implementation, 2019, pp. 686–701.
[2] Y. Wang and H. Zhu, “Verification-guided programmatic controller synthesis,” in
International Conference on Tools and Algorithms for the Construction and Analysis
of Systems. Springer, 2023, pp. 229–250.
[3] G. F. Montufar, R. Pascanu, K. Cho, and Y. Bengio, "On the number of linear regions of deep neural networks," Advances in Neural Information Processing Systems, vol. 27, 2014.
---
Rebuttal Comment 1.1:
Comment: Thank you for the detailed response. I would like to change my score to 6. | Rebuttal 1:
Rebuttal: ## Discussion on Limitations
We thank all the reviewers for the valuable feedback. We first briefly
discuss the main limitations of our method, as raised by all the
reviewers.
One limitation concerns a potential rapid increase in the number of
partitions when a preset reward threshold can never be reached. One
solution would be to locate and partition the regions with poor
performance. These regions that result in the failure of training can be
regarded as *counterexamples* and should be further divided. We plan to
leverage counterexample-guided abstraction refinement (CEGAR) from
formal methods to cope with this problem.
Another potential limitation is that the verification complexity may
still be high for piece-wise linear controllers, although we show in the
paper that they are more amenable and verification-friendly than neural
network controllers. At present, we employ off-the-shelf hybrid
verification tools [1] to demonstrate the effectiveness of
our approach. We are considering implementing dedicated algorithms to
improve the verification efficiency.
## Train-Then-Transform using PLDNN
Encouraged by Reviewer 63JN's suggestion, we have conducted a quick
experiment to compare the two approaches: transform-then-train and
train-then-transform. The experiment was conducted on B1. The results
are presented in Table 1 in our PDF file.
**Experimental Setting.** We use the same network structure
Tanh$_{2 \times 20}$ for training the following three decision networks
(i.e. DNN, PLDNN, and Distilled PLDNN). For DNN and PLDNN, the training
algorithm is DDPG. The distilled PLDNN is obtained using supervised
learning in which the training data is obtained through sampling from
the traces generated by the DNN controller.
In the train-then-transform approach, we first train a DNN controller
whose cumulative reward is about 52.45. Then we use supervised learning
to distill the DNN's policy into a PLDNN containing four partitions.
**Results.** Regarding performance, i.e., the cumulative reward, the
directly trained PLDNN can reach 52.97, which is similar to that of the
canonically trained DNN (52.45). We also observe that there is a slight
decrease in the system performance after the DNN is distilled to a PLDNN
(51.25).
Regarding verification, both the distilled PLDNN and the directly
trained PLDNN can be verified faster than the DNNs, which demonstrates
PLDNN's verification-friendliness. Moreover, the verification result of
the directly trained PLDNN is more precise, i.e., tighter and closer to the
simulation results. Both the distilled PLDNN and original DNN result in
larger overestimation due to the over-approximation.
**Conclusion.** The preliminary experimental results show that, compared
to our transform-then-train approach, the train-then-transform approach
can achieve similar verification performance, but may cause a
performance decrease and extra overestimation of verification results.
Nevertheless, our comparison is preliminary; more comprehensive
experiments are therefore needed to draw a fair conclusion.
[1] X. Chen, E. Ábrahám, and S. Sankaranarayanan, "Flow*: An analyzer for non-linear hybrid systems," in Computer Aided Verification: 25th International Conference, CAV 2013, Saint Petersburg, Russia, July 13–19, 2013, Proceedings. Springer, 2013, pp. 258–263.
Pdf: /pdf/0b3019b261a19904a5fa17a9f46d231de8792259.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Macro Placement by Wire-Mask-Guided Black-Box Optimization | Accept (poster) | Summary: This paper proposes a new black-box optimization framework, called WireMask-BBO, for macro placement, which is an important problem in the electronic design automation (EDA) community. By using different black-box optimization algorithms, The experiments show it can achieve improvements (shorter half-perimeter wirelength (HPWL)) over previous methods.
Strengths: The general framework WireMask-BBO proposed by this paper provides a somewhat new angle on the macro placement problem. The paper is well-written and easy to follow.
Weaknesses: The scalability of the proposed method is doubtful because Bayesian optimization and evolutionary algorithms may not be suitable for large-scale problems.
The experiments are insufficient. The experiments don’t compare with the state-of-the-art macro placement method. The experiments don’t compare the runtime of the proposed method with other methods.
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: (1) How about the performance comparison with the state-of-the-art macro placement method in the experiments?
(2) How about the runtime of the proposed method and compare it with other methods?
(3) Is it necessary to tune hyperparameters extensively to achieve good results in the experiments?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 2 fair
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your constructive comments and suggestions. Below please find our response.
### Q1 Runtime comparison; scalability of WireMask-BBO; BO and EA not suitable for large-scale problems?
In our experiments, both the packing-based method SP-SA and our proposed WireMask-EA run for 1000 minutes. The results in Table 1 in the paper show that WireMask-EA achieves much better HPWL than SP-SA, implying that WireMask-EA is more efficient. The three analytical methods, NTUPlace3, RePlace and DREAMPlace, are very efficient, because the original complex problem is solved by relaxing to a mathematical programming problem. Compared with the state-of-the-art RL-based method MaskPlace, the results in Figure 4 in the paper have shown that WireMask-EA is much more efficient, which takes an average of 8 minutes to surpass the 200-minutes converged result of MaskPlace across six benchmark chips. Thanks to the suggestion of Reviewers 3gz2 and nUfA, we have compared the ChiPFormer method [1], recently published at ICML 2023. The results are shown in Table 1 of the accompanying PDF file. We can observe that after using the same number of evaluations, WireMask-EA outperforms ChiPFormer clearly on 9 out of 10 circuits. That is, WireMask-EA is more efficient than ChiPFormer. Thanks to your suggestion, we have implemented the AutoDMP method, recently published at ISPD 2023. The results in Figure 1 of the accompanying PDF file show that after using the same runtime, WireMask-EA achieves better HPWL than AutoDMP.
Based on the above comparison, we can find that WireMask-BBO has better scalability than all previous methods except the analytical ones. This can also be validated by the chip scales used in the experiments of these works. For example, only 3 data sets in ChiPFormer [1] contain more than 1000 macros (1329, 1293 and 1024, respectively); only two data sets in DeepPR+ [2] have over 1000 macros (1309 and 1227, respectively). When facing large data sets like bigblue4 with 8170 macros, these two works manually selected only hundreds of macros for placement. In the experiments of AutoDMP [3], at most 320 macros are placed. Note that in our work, we have run the proposed framework WireMask-BBO on the full bigblue4 benchmark with 8170 macros, which is the largest scale reported so far, to the best of our knowledge.
However, it must be acknowledged that our framework WireMask-BBO still struggles with very large-scale data sets, e.g., bigblue2 with 23084 macros. As WireMask-BBO can be equipped with any black-box optimization (BBO) algorithm, one direct way forward is to employ Bayesian Optimization (BO) or Evolutionary Algorithms (EAs) designed for high-dimensional scenarios. Note that we only employed simple BBO techniques in the paper, which already led to superior performance over previous methods. Though BO and EA are traditionally inefficient for high-dimensional problems, much effort has been devoted to this topic, yielding efficient BBO algorithms, e.g., RDHEBO based on decomposition [4], ALEBO based on embedding [5], MCTS-VS-BO based on variable selection [6], and self-guided evolution strategies [7]. We will revise to include the runtime comparison and add more discussion. Thank you.
### Q2 Comparison with AutoDMP [3].
Thank you for pointing out the related method AutoDMP, which was recently published at ISPD'23. It is built mainly upon the efficient DREAMPlace, using BO to explore the configuration space and showing potential in real EDA applications. We have revised the paper to add a comparison with AutoDMP. Due to time and computational resource limits, we were only able to run it on two chips, adaptec1 and bigblue1. The results are shown in Figure 1 of the PDF file. We can observe that WireMask-EA outperforms AutoDMP. We will run AutoDMP on all the tasks and include the results in the final version. Thank you for your suggestion.
### Q3 Is it necessary to tune hyperparameters extensively to achieve good results in the experiments?
We did not tune hyperparameters extensively. When applying the general framework WireMask-BBO, there are only two hyperparameters to be set, i.e., the number of partitions of the chip canvas and the employed BBO algorithm. In our experiments, the number of partitions is heuristically determined based on detailed macro statistics and varies across different chips, as shown in Table 4 of the Appendix. Thanks to your suggestion, we have run the proposed WireMask-EA with 224 partitions for all the chips, which is consistent with the setting of MaskPlace. The results are shown in Table 3 of the PDF file. We can observe that WireMask-EA still consistently outperforms MaskPlace and ChiPFormer, implying its robustness to the number of partitions of the chip canvas. Regarding the employed BBO algorithm, our experiments have shown that employing random search, a simple EA, or the BO algorithm TuRBO can all lead to superior performance over previous methods, implying the robustness of WireMask-BBO to the employed BBO algorithm. Note that for each employed BBO algorithm itself, we used its default hyperparameters for all chips. Thus, we believe our proposed framework WireMask-BBO is easy to use in practice. We hope our explanation addresses your concerns. Thank you.
References
[1] ChiPFormer: Transferable Chip Placement via Offline Decision Transformer. ICML'23.
[2] The Policy-Gradient Placement and Generative Routing Neural Networks for Chip Design. NeurIPS'22.
[3] AutoDMP: Automated DREAMPlace-based Macro Placement. ISPD'23.
[4] Are Random Decompositions all we need in High Dimensional Bayesian Optimisation? ICML'23.
[5] Re-Examining Linear Embeddings for High-Dimensional Bayesian Optimization. NeurIPS'20.
[6] Monte Carlo Tree Search based Variable Selection for High Dimensional Bayesian Optimization. NeurIPS'22.
[7] Self-Guided Evolution Strategies with Historical Estimated Gradients. IJCAI'20.
---
Rebuttal Comment 1.1:
Comment: Thanks for the clarification. VLSI placement is considered challenging and difficult due to the complex problem structure and the large scale (and always will be). So if a solution cannot scale, then there must be other points that are extremely attractive to designers to make it acceptable, which has not been clearly identified currently.
---
Reply to Comment 1.1.1:
Comment: Thanks for your feedback. However, we are confused about the concerns expressed in your reply and are not sure whether they have been addressed. As for the weaknesses and questions you raised in the initial review, we believe that our response has addressed them. To be specific, we list your comments and a brief summary of our response below.
- Comparison with the recent method AutoDMP (published at ISPD'23). We have conducted additional experiments to compare our WireMask-EA with AutoDMP as you suggested. Besides, we have compared WireMask-EA with another SOTA method, ChiPFormer (published at ICML'23), as suggested by Reviewer 3gz2 and Reviewer nUfA. Experimental results demonstrate the superior performance of our proposed WireMask-EA. Note that these two SOTA methods are considered concurrent works according to the NeurIPS rules, and ChiPFormer had not even been released when we submitted this work.
- Hyperparameter tuning. We introduced all the hyperparameters used in the paper. We did not tune hyperparameters extensively, and a common setting has been proven to show remarkable performance. We have added more detailed discussions on the settings of hyperparameters.
- Runtime analysis and scalability. We have provided detailed runtime analysis and comparison with related methods, including two SOTA methods AutoDMP and ChiPFormer, to reveal our runtime efficiency. For the scalability of our proposed WireMask-BBO, we have shown that WireMask-BBO performs better than other related works on large-scale problems, demonstrating the better scalability of our method.
We fully agree with you that the scalability of VLSI design methods is indeed a significant challenge, which, however, can hardly be fully addressed by a single work. Solving it step by step is more realistic from the perspective of scientific research. We believe our proposed WireMask-BBO brings significant improvements over the existing SOTA methods, as shown in our experiments. Besides, WireMask-BBO provides a new perspective, i.e., solving VLSI placement by black-box optimization, which can provide more insight into the design of VLSI methods and can benefit from the progress of high-dimensional black-box optimization algorithms. Thus, we think our work contributes enough to the community: the proposed WireMask-BBO not only achieves SOTA performance and can be used to fine-tune existing placements, but also has the potential to be a new viable direction for macro placement and to promote further advances.
**We hope that our response has addressed your concerns, but if we missed anything please let us know.**
---
Reply to Comment 1.1.2:
Title: Add further experiments to address your concerns.
Comment: Our proposed WireMask-BBO is a general framework for macro placement, which can be equipped with any black-box optimization (BBO) algorithm. Our experiments show that even employing simple BBO algorithms leads to superior performance over previous methods. The runtime analysis and comparison have shown that WireMask-BBO is more scalable than recent methods, which can also be validated by the largest chip scales used in the experiments of different works, e.g., 8170 macros by WireMask-BBO vs. 1329 macros by ChiPFormer [1].
As we claimed before, its versatility allows WireMask-BBO to benefit from the progress of high-dimensional BBO algorithms, e.g., efficient BBO algorithms for high-dimensional scenarios can be employed to further improve the efficiency of WireMask-BBO. To show this, we arbitrarily select DropoutBO [2], a Bayesian optimization (BO) algorithm for high-dimensional scenarios based on random variable selection, and implement WireMask-BBO equipped with DropoutBO on the benchmarks adaptec4 (with 1329 macros) and bigblue3 (with 1298 macros). The detailed results are shown in the following table, giving the HPWL value achieved every 50 search steps. We can clearly observe that compared with the BO algorithm TuRBO [3] used in the paper, using DropoutBO leads to significant improvement. Thus, we believe that the proposed general framework WireMask-BBO is scalable, and can bring a new viable direction for solving the important macro placement problem. We hope our further experiments and explanation can address your concerns. Thank you.
| method | dataset | 50 | 100 | 150 | 200 | 250 | 300 | 350 | 400 | 450 | 500 |
|------------|----------|--------------|--------------|--------------|--------------|--------------|--------------|--------------|--------------|--------------|--------------|
| BO | adaptec4 | 68.28 ± 1.31 | 67.76 ± 1.36 | 67.38 ± 1.4 | 66.38 ± 0.52 | 65.75 ± 0.46 | 65.23 ± 0.89 | 64.78 ± 1.18 | 64.47 ± 1.02 | 64.36 ± 1.00 | 64.25 ± 1.07 |
| DropoutBO | adaptec4 | 65.77 ± 1.65 | 65.3 ± 1.26 | 63.7 ± 0.45 | 63.25 ± 0.77 | 62.88 ± 1.03 | 61.5 ± 0.75 | 61.21 ± 0.91 | 61.18 ± 0.94 | 60.52 ± 0.97 | 60.08 ± 1.59 |
| BO | bigblue3 | 72.72 ± 3.27 | 71.9 ± 4.01 | 69.43 ± 5.97 | 69.31 ± 5.99 | 69.05 ± 6.17 | 68.7 ± 6.59 | 68.66 ± 6.59 | 68.61 ± 6.58 | 68.07 ± 6.35 | 67.83 ± 6.27 |
| DropoutBO | bigblue3 | 69.35 ± 3.07 | 64.41 ± 2.88 | 61.61 ± 2.31 | 61.17 ± 1.87 | 61.07 ± 1.92 | 60.7 ± 1.4 | 60.15 ± 1.81 | 59.58 ± 1.04 | 59.53 ± 0.96 | 59.03 ± 1.03 |
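To make the dimension-dropout idea concrete, here is a minimal sketch in the spirit of DropoutBO [2]: each iteration optimizes only a random subset of coordinates and inherits the rest from the best solution so far. This is an illustration only, not the paper's implementation; in particular, the inner optimizer is plain random resampling instead of a Gaussian-process model.

```python
import random

def dropout_bbo(evaluate, dim, n_iters=200, subset_size=5, seed=0):
    # Illustrative dimension-dropout loop (after DropoutBO [2]): each
    # iteration resamples only a random subset of variables, filling the
    # remaining coordinates from the incumbent best. A real DropoutBO
    # would fit a GP and optimize an acquisition function on the subset.
    rng = random.Random(seed)
    best = [rng.random() for _ in range(dim)]
    best_val = evaluate(best)
    for _ in range(n_iters):
        active = rng.sample(range(dim), subset_size)
        cand = list(best)              # inherit inactive coordinates
        for i in active:               # resample only the active ones
            cand[i] = rng.random()
        val = evaluate(cand)
        if val < best_val:             # minimization (e.g., HPWL)
            best, best_val = cand, val
    return best, best_val
```

In the WireMask-BBO setting, `evaluate` would be the wire-mask-guided greedy adjustment followed by HPWL computation, and `dim` would be twice the number of macros.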
References
[1] ChiPFormer: Transferable chip placement via offline decision transformer. ICML'23.
[2] High dimensional Bayesian optimization using dropout. IJCAI'17.
[3] Scalable global optimization via local Bayesian optimization. NeurIPS'19. | Summary: This paper presents a framework using BBO for macro placement in VLSI designs. Any placement solution for a set of macros can be optimized using the wire masks (presented in a prior work using RL) where the optimization goal is to minimize the HPWL of the output. In addition to random inputs, the framework can also be used for further enhancement of any existing solutions.
Strengths: The work tackles a critical problem in VLSI designs.
The idea of casting the placement problem to a BBO is interesting and novel.
This framework can be used to further improve any existing solutions for the macros. This can be used as another step in PnR with reasonable runtime.
Overall, the paper explains the problem, the existing solutions, and the proposed work clearly.
Weaknesses: The idea of casting the placement problem to a BBO is interesting and novel. However, as the authors admit in Section 1, this work does not develop any new BBO algorithm. To the reviewer, the experimental results are not extensive and convincing; this is described in the limitation section.
The paper also states in the conclusion that it can only place macros but not standard cells without explanation, which limits the application of the proposed work to the actual VLSI problem. On the other hand, this framework can be used to further improve any existing solutions for the macros. This can be used as another step in PnR with reasonable runtime.
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: Why is EA-based framework better than the other two BBO algorithms?
Why cannot the proposed framework be used for standard cell placement? Can't you use a more fine-grained canvas for the problem, where the standard cells are larger than the grid size?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 3 good
Contribution: 2 fair
Limitations: The computational time for the proposed work seems to be high. Therefore, in the experimental section, the paper does not include the complex benchmarks that have thousands of macros to be placed. This contradicts the paper's claim that this approach is scalable.
The evaluation of the proposed method only uses 5 random seeds. It would be more convincing if the paper includes more experimental data.
The proposed framework cannot place the standard cells in the design. Therefore, the comparison between this work and the existing methods seems to be unfair.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your valuable comments. Below please find our response.
### Q1 Does not develop any new BBO algorithm.
We want to emphasize that our main contribution is introducing the general framework WireMask-BBO for macro placement, rather than developing new BBO algorithms. WireMask-BBO can be equipped with any BBO algorithm. Our experiments show that even employing simple BBO techniques leads to superior performance over previous methods, suggesting that WireMask-BBO may be a new viable direction for macro placement. We agree that developing new efficient BBO algorithms under the WireMask-BBO framework is important for handling very large-scale circuits, as stated in our limitations part. But we believe that the contribution of proposing WireMask-BBO is significant enough. In fact, we plan to open-source WireMask-BBO and use it as an optimization benchmark to encourage the invention of more efficient BBO algorithms for solving macro placement problems, as well as to broaden the application scenarios of BBO.
### Q2 Why cannot be used for standard cell placement?
As you indicated, the proposed framework can be naturally applied to standard cell placement by using a fine-grained canvas where the number of grids is larger than the number of standard cells. However, the number of standard cells can be in the millions, resulting in a very large search space. Furthermore, the wire-mask-guided greedy procedure for objective evaluation would become very expensive, since it requires calculating the wire mask for each cell. A more fine-grained canvas would also increase the wire mask computation time. Thus, our proposed WireMask-BBO currently cannot deal with standard cells. In fact, this limitation is shared with methods based on the step-by-step placement formulation, including Graph Placement, DeepPR, MaskPlace and ChiPFormer. We will revise the paper to add some discussion. Thank you.
### Q3 Why is EA better than the other two BBO algorithms?
RS relies solely on random sampling without leveraging any search history. BO performs well in many low-dimensional tasks (typically when the dimension $d \leq 20$ [1]), but suffers from the curse of dimensionality due to the time-consuming cost of updating the Gaussian process surrogate model and optimizing the acquisition function [2]. For example, in our experiments where $d$ (i.e., 2 times the number of macros) is always larger than 1000, the EA can sample many more solutions (about 2--7 times more) than BO during the 1000-minute run. For the EA, we designed a specific mutation operator, which may also contribute to its efficiency. We will revise the paper to add more explanation. Thank you.
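The mutation operator mentioned here (exchanging the coordinates of two randomly chosen macros) can be sketched as follows. The surrounding (1+1)-style EA loop is a minimal illustration, with `evaluate` standing in for the paper's wire-mask-guided legalization plus HPWL scoring.

```python
import random

def swap_mutation(solution, rng):
    # Mutation operator described in the rebuttal: pick two macros at
    # random and exchange their (x, y) coordinates. `solution` is a
    # list of (x, y) tuples, one entry per macro.
    child = list(solution)
    i, j = rng.sample(range(len(child)), 2)
    child[i], child[j] = child[j], child[i]
    return child

def one_plus_one_ea(evaluate, init, n_iters=1000, seed=0):
    # Minimal single-solution EA: mutate the current solution and keep
    # the child if it is no worse (the objective is minimized).
    rng = random.Random(seed)
    best, best_val = init, evaluate(init)
    for _ in range(n_iters):
        child = swap_mutation(best, rng)
        val = evaluate(child)
        if val <= best_val:
            best, best_val = child, val
    return best, best_val
```

Because the mutation only swaps existing coordinates, every child occupies the same multiset of grid positions as its parent, which keeps the search within legalizable placements.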
### Q4 Computational time high? Scalability?
Please refer to Q1 in the response to Reviewer n9zj due to space limitation.
### Q5 More random seeds?
Five random seeds are commonly used in previous works, e.g., MaskPlace and the recently proposed ChiPFormer [3]. However, we agree that using more random seeds would be better. Thanks to your suggestion, we have conducted additional experiments with 30 random seeds, testing WireMask-EA using 2000 evaluations on the ibm01 and ibm02 benchmarks. The obtained HPWL values (mean ± std.) are 2.48 ± 0.13 and 3.60 ± 0.11, respectively. When using 5 random seeds, they are 2.39 ± 0.07 and 3.56 ± 0.05, respectively. We believe that this slight difference will not affect the conclusions. For example, the results of the state-of-the-art method ChiPFormer using 5 random seeds are 3.05 ± 0.01 and 4.24 ± 0.25, respectively, as shown in Table 1 of the PDF file; we can observe that they have clear gaps with the results of WireMask-EA. Due to time and computational resource limits, we were only able to run more random seeds on these two tasks. We will cover a wider range of tasks in the revised version. Thank you for your suggestion.
### Q6 The proposed framework cannot place the standard cells in the design. Therefore, the comparison between this work and the existing methods seems to be unfair.
The previous methods SP-SA, Graph Placement, DeepPR+, MaskPlace, ChiPFormer, and the proposed WireMask-BBO all concentrate on macro placement. When comparing the results of full placement, the same DREAMPlace is applied to the macro placement generated by each method for the subsequent standard cell placement. Thus, the comparison is fair. In fact, such a flow has also been adopted in previous works, e.g., MaskPlace and ChiPFormer. We will revise to make it clearer.
References
[1] A Tutorial on Bayesian Optimization. 2018.
[2] A Survey on High-dimensional Gaussian Process Modeling with Application to Bayesian Optimization. ACM TELO'22.
[3] ChiPFormer: Transferable Chip Placement via Offline Decision Transformer. ICML'23.
---
Rebuttal Comment 1.1:
Comment: The authors answered my questions. The proposed method opens a new research direction in VLSI placement with promising experimental results. However, this method cannot scale to the existing real-world VLSI designs, and the authors admitted that it will be hard to scale to the existing VLSI problems. It would be good to include a description in the paper of how to include this method in the existing VLSI design flows. I have raised my rating to 5.
---
Reply to Comment 1.1.1:
Title: Thanks for your reply.
Comment: Thanks for your reply! One direct way is to apply the proposed method for macro placement, and then use DREAMPlace for standard cell placement, which has been adopted in many works, e.g., DeepPR, MaskPlace and ChiPFormer. We will include a description in the final version. Thank you. | Summary: The authors propose a new placement method that is based on the black-box framework. The framework leverages the wire mask-guided information and can achieve significant placement results compared with the state-of-the-art methods.
Strengths: 1. The novel method is based on black-box optimization, which has not been applied to the placement task before. Although many RL methods have been proposed, they are not efficient enough. Black-box optimization might be a viable direction.
2. The wire mask as the guide for generating phenotype representation is also very novel. It can quickly render a suitable solution based on the initial representation, improving efficiency.
3. The experiments are very comprehensive and solid, showing that the performance of the proposed method can consistently outperform existing methods.
4. The paper writing is well-written and easy to understand.
5. The code is open-source, improving reproducibility.
Weaknesses: 1. The full placement cannot surpass the DREAMPlace, which means the proposed method can only work well in macro placement.
2. The reasons for the improvement in the congestion metric are not clear. The method does not consider the congestion metric in its search process.
3. The recent work [1] based on Maskplace should also be discussed in the related work part.
[1] Lai Y, Liu J, Tang Z, et al. Chipformer: Transferable chip placement via offline decision transformer.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. Have you tested the effect of the number of initial states on the final results?
2. Why your method can get better congestion results when you do not consider it in your method.
3. The efficient comparison only provides the clock time. However, the number of search steps is also substantial. Could you provide how many steps you use for each circuit (or how much clock time is consumed to perform a step)?
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your valuable and positive comments. Below please find our response.
### Q1 The full placement cannot surpass DREAMPlace.
In Table 6 of Appendix B.2, though DREAMPlace achieves the best HPWL on 3 out of 7 tested chips, our proposed WireMask-EA shows significant improvements on four chips, significant disadvantages on two chips, and no significant difference on one chip, according to the Wilcoxon rank-sum test with significance level 0.05. This indicates an overall superiority of our proposed method. However, we cannot guarantee consistent superiority over DREAMPlace on all data sets. This is because for the full placement task, WireMask-EA first conducts macro placement and then uses DREAMPlace for standard cell placement, while DREAMPlace considers the placement of macros and standard cells simultaneously, which may lead to favorable results. One direction of our future work is to consider standard cell placement in our framework, as stated in the limitation part. Thank you.
### Q2 Why congestion better?
Please refer to Q2 in general response.
### Q3 Discussion on ChiPFormer [1].
Please refer to Q1 in general response.
### Q4 Influence of the number of initial states?
The Evolutionary Algorithm (EA) adopted in our paper starts from a single initial solution, and iteratively improves it by mutation and selection. The initial solution is generated by selecting the best from a pool of 100 random solutions. We assume that you are asking about the influence of the pool size here. Thanks to your suggestion, we have tested WireMask-EA on the chip adaptec1 using pool sizes of 1, 10, 20, 50, 200, and 2000. For each run of WireMask-EA, the total number of evaluations is set to 2000, which includes the number of evaluations (i.e., the pool size) used for initialization. Note that WireMask-EA with a pool size of 2000 is just WireMask-RS, which performs random search. The final HPWL values are shown below. We can observe that as the pool size increases gradually, which leads to a better initial solution, WireMask-EA achieves better (smaller) HPWL. However, when the pool size is large enough (e.g., 200 here), the performance of WireMask-EA starts to degrade, which is expected because too many evaluations are then used for initialization (which performs random search) and the subsequent exploration by the EA tends to be insufficient. We used a pool size of 100 in our experiments. Meanwhile, we can observe that WireMask-EA with any pool size surpasses the state-of-the-art method ChiPFormer, which achieves an HPWL value of 6.62 ± 0.05 after using 2000 evaluations, as shown in the first line of Table 1 of the accompanying PDF file.
| Pool size for initialization | 1 | 10 | 20 | 50 | 100 | 200 | 2000 |
| --- | --- | --- | --- | --- | --- | --- | --- |
| adaptec1 | 6.50 ± 0.34 | 6.22 ± 0.10 | 6.14 ± 0.06 | 6.11 ± 0.09 | 5.96 ± 0.08 | 5.98 ± 0.06 | 6.13 ± 0.05 |
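The best-of-pool initialization described above can be sketched in a few lines; `evaluate` and `sample` are stand-ins for the HPWL objective and the random placement generator, not the paper's actual interfaces.

```python
import random

def best_of_pool_init(evaluate, sample, pool_size=100, seed=0):
    # Initialization described above: draw `pool_size` random solutions
    # and keep the one with the lowest objective value as the EA's
    # starting point. `sample(rng)` returns one random solution.
    rng = random.Random(seed)
    pool = [sample(rng) for _ in range(pool_size)]
    return min(pool, key=evaluate)
```

The pool size trades initialization quality against the evaluation budget left for the EA, which is exactly the trend visible in the table above.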
### Q5 How many steps used for each circuit?
Thanks for raising this issue. In the experiments, our method is run for 1000 minutes on each chip. The number of search steps used for each chip is shown below, which will be included in the revised version. Following your suggestion in Q3, we have compared WireMask-EA with ChiPFormer under the same number of search steps, as shown in Table 1 of the PDF file. We can observe that using 1, 300 or 2k steps, WireMask-EA outperforms ChiPFormer on 9 out of 10 chips. We will revise the paper to add some discussion. Thank you.
| 1000 min | adaptec1 | adaptec2 | adaptec3 | adaptec4 | bigblue1 | bigblue3 | bigblue4 |
| ----------- | -------- | -------- | -------- | -------- | -------- | -------- | -------- |
| WireMask-RS | 3610 | 3689 | 5257 | 2791 | 7367 | 976 | 106 |
| WireMask-BO | 820 | 655 | 810 | 600 | 1580 | 540 | 59 |
| WireMask-EA | 3526 | 3540 | 5179 | 2685 | 7217 | 877 | 97 |
References
[1] ChiPFormer: Transferable Chip Placement via Offline Decision Transformer. ICML'23.
---
Rebuttal 2:
Comment: Thank you for your reply. The rebuttal addresses my concerns. I think this work is novel and experimentally sufficient. Although there is reviewer concern about scalability, from a research perspective, this work still has great potential with the black-box optimization method. So, I still recommend this paper for acceptance.
---
Rebuttal Comment 2.1:
Title: Thanks for your appreciation.
Comment: Thanks for your appreciation. We are glad to hear that your concerns have been addressed. We will make sure to include the added results and discussion in the final version. Thank you. | Summary: This paper proposes a novel black-box optimization (BBO) framework, namely WireMask-BBO, for macro placement in chip design. Specifically, it devises a post-processing technique that legalizes any searched placement solution while optimizing the half-perimeter wirelength (HPWL). The post-processing technique allows us to perform BBO algorithms to search for solutions with better HPWL. Experiments demonstrate that WireMask-BBO outperforms previous state-of-the-art (SOTA) methods, achieving better HPWL performance in less time.
Strengths: 1. This paper explores BBO methods for macro placement, which may provide a new insight for the research community.
2. The proposed post-processing technique is simple yet effective. It also has a good versatility because it can be combined with other placement methods and BBO methods.
Weaknesses: 1. The proposed post-processing is purely heuristic. The motivation is unclear, and the intuitive explanation for its advantages is insufficient.
2. The proposed method only targets optimizing HPWL, without explicitly considering other important metrics like routing wirelength or congestion. It does not consider cells or routing either. Moreover, the framework can hardly be transferred to tasks with those metrics under consideration, which limits its real application in EDA.
3. Because introducing BBO-based methods is one of the core contributions of this paper, the authors may want to illustrate the implementation of BBO algorithms for macro placement in more detail.
4. In Algorithm 1, the macros are ordered decreasingly according to areas, while in Figure 3, the smaller macro-A is considered first.
5. A recent related work [1] is missing.
[1] Lai Y, Liu J, Tang Z, et al. Chipformer: Transferable chip placement via offline decision transformer. ICML 2023.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. Since only HPWL is considered as the objective, why does WireMask-BBO also outperform MaskPlace in congestion?
2. What are the results of routing wirelengths?
3. Under this framework, how to take cell placement or some important yet computationally insufficient metrics into consideration?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your constructive comments. Below please find our response.
### Q1 The proposed method is heuristic; insufficient intuitive explanations.
We want to clarify our motivation in designing the proposed framework. To efficiently improve the HPWL of a solution while guaranteeing non-overlapping, we first design a greedy procedure guided by the wire mask, which sequentially adjusts each macro of the solution to a position with the minimum marginal increment in HPWL. The adjustment order is determined by the total area of all the cells connected with each macro: all macros are adjusted sequentially in decreasing order of this computed area, because a macro with a larger computed area connects with more large cells and is thus intuitively more important. In the adjustment of each macro, if the position with the minimum marginal increment in HPWL is not unique, the one closest to its original position is selected, which utilizes some global information of the original placement to reduce the risk of getting trapped in local optima due to the greedy nature. Based on this greedy procedure, we further apply BBO algorithms for exploration, to search for solutions with better HPWL. We believe our proposed framework is intuitive and reasonable. We will add more explanations in the paper. Thank you.
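Two ingredients of this greedy procedure can be sketched directly from the description above: the HPWL objective and the adjustment-order rule. The data structures (`nets`, `pos`, `area`) are assumptions for illustration, and the actual wire-mask candidate-position search and overlap handling are omitted.

```python
def hpwl(nets, pos):
    # Half-perimeter wirelength: for each net, the half-perimeter of the
    # bounding box of its pins. `nets` maps a net id to the macro ids it
    # connects; `pos` maps a macro id to its (x, y) position.
    total = 0.0
    for macros in nets.values():
        xs = [pos[m][0] for m in macros]
        ys = [pos[m][1] for m in macros]
        total += (max(xs) - min(xs)) + (max(ys) - min(ys))
    return total

def adjustment_order(nets, area):
    # Ordering rule described above: a macro's priority is the sum of
    # the areas of all macros connected to it (including itself);
    # macros are adjusted in decreasing order of this quantity.
    connected = {m: {m} for m in area}
    for macros in nets.values():
        for m in macros:
            connected[m].update(macros)
    score = {m: sum(area[n] for n in connected[m]) for m in area}
    return sorted(area, key=lambda m: -score[m])
```

This also matches the Figure 3 discussion later in the rebuttal: a small macro connected to many large macros can still be adjusted first.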
### Q2 Optimize HPWL of macros only; how to take cell placement or more metrics into consideration?
Thanks for your question, which is indeed a challenge for the area of macro placement. Following previous works (Graph Placement, DeepPR and ChiPFormer), we selected HPWL as the objective to be optimized, because it is a good approximation of wirelength and is also efficient to compute. Meanwhile, more metrics can be considered in our proposed BBO framework by directly treating them as the optimization objective. But there are two issues to be tackled before application. Firstly, as you indicated, such a metric may involve cell placement and can be computationally expensive. To address this issue, one possible way is to employ advanced BBO techniques such as SAASBO [1] and Guided-ES [2], which can efficiently optimize expensive functions. Note that our framework is general and can be equipped with any BBO algorithm; we only employed simple BBO techniques in the paper, which, however, led to superior performance over previous methods. Secondly, our greedy procedure requires a criterion (e.g., the HPWL used in the paper) which can guide the improvement of a solution and can also be computed step by step. For this issue, we may design a surrogate metric to approximate the true one, e.g., we can use RUDY to approximate the congestion. We will revise to add some discussion, and treat this as an important direction for future work. Thank you.
### Q3 More details of the implementation of BBO algorithms.
WireMask-BBO employs a wire-mask-guided greedy adjustment procedure (efficiently generating high-quality feasible solutions) to serve as the black-box function, and adopts RS, BO and EA for optimization. To be specific, RS generates solutions by allocating all macros' positions randomly, and records the historical best. BO establishes a surrogate model, and samples a new solution by optimizing an acquisition function based on the surrogate model; we used a specific BO algorithm, TuRBO, in the paper. EA maintains a population of solutions, and iteratively improves the population by recombination and mutation; in the paper, we used a simple EA which maintains only one solution and uses mutation only. We designed the mutation operator to randomly select two macros and exchange their coordinates in a solution. We are sorry for not illustrating these in enough detail, and will add more discussion in the paper. Thank you for your suggestion.
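A minimal sketch of how any BBO algorithm plugs into the framework, with `legalize` and `hpwl` as stand-ins for the paper's wire-mask greedy adjustment and HPWL computation (assumptions here, not the actual implementation); RS is shown as the simplest instantiation.

```python
import random

def wiremask_objective(genotype, legalize, hpwl):
    # The black-box function optimized by WireMask-BBO: a raw genotype
    # (macro coordinates) is first turned into a feasible placement by
    # the greedy adjustment, then scored by HPWL.
    return hpwl(legalize(genotype))

def random_search(objective, sample, n_evals=100, seed=0):
    # The RS instantiation described above: sample all macros' positions
    # at random and record the historical best.
    rng = random.Random(seed)
    best, best_val = None, float("inf")
    for _ in range(n_evals):
        cand = sample(rng)
        val = objective(cand)
        if val < best_val:
            best, best_val = cand, val
    return best, best_val
```

Swapping `random_search` for a BO or EA loop changes only how candidates are proposed; the legalize-then-score objective stays the same, which is what makes the framework agnostic to the BBO algorithm.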
### Q4 Macro-A considered first in Figure 3?
As described in line 188, Page 5, the placement order of a macro is determined by the sum of the areas of all its connected macros (including itself), instead of the macro's own area. In Figure 3, macro A is connected to both B and C, and thus has the largest connected area, so it is adjusted first. We will revise the paper to make this clearer.
### Q5 Comparison with ChiPFormer.
Please refer to Q1 in general response.
### Q6 Why congestion better?
Please refer to Q2 in general response.
### Q7 Routing wirelengths?
Yes, routing wirelength is a critical metric, which is, however, expensive to evaluate: it can only be measured after standard cell placement and routing, and the evaluation itself is time-consuming. This is why we use HPWL as a surrogate, as many previous works also do. Thanks to your suggestion, we have tried to report the routing wirelengths.
For the routing task, our currently adopted benchmarks (i.e., ISPD'05 and ICCAD'04) are not supported by the open-source routers OpenROAD and NCTU-GR 2.0. Thus, we turn to the DAC 2012 benchmark [3] for routing wirelength evaluation, with DREAMPlace for standard cell placement and NCTU-GR 2.0 for routing. For the proposed algorithm WireMask-EA, we first use it to generate a macro placement, and then adopt DREAMPlace for the subsequent standard cell placement. For DREAMPlace, we set all macros to be movable, and optimize macros and standard cells together. The full-placement HPWL and the total wirelength reported by NCTU-GR 2.0 are shown in Table 2 in the PDF. Due to time limitations, we selected superblue19 with 286 macros for testing. The results show that WireMask-EA achieves both better HPWL and better routing wirelength. We will run the experiments on more benchmark chips from DAC 2012, and add them to the final version. Thank you.
References
[1] High-Dimensional Bayesian Optimization with Sparse Axis-Aligned Subspaces. UAI'21.
[2] Guided Evolutionary Strategies: Augmenting Random Search with Surrogate Gradients. ICML'19.
[3] The DAC 2012 Routability-Driven Placement Contest and Benchmark Suite. DAC'12.
---
Rebuttal Comment 1.1:
Title: Thank you. Increase my score to 6.
Comment: I appreciate the authors' efforts in responding to my questions. I hope the authors will revise the paper accordingly. Though there are still some limitations, as we have discussed, I think the proposed framework will benefit the research community, and it deserves further study.
I have raised my score from 5 to 6.
---
Reply to Comment 1.1.1:
Title: Thank you! We are working on revising the paper.
Comment: Thanks for your reply! We will make sure to revise the paper according to all reviewers’ comments and suggestions, and incorporate the added results in our revision. | Rebuttal 1:
Rebuttal: ## General response
We are very grateful to the reviewers for carefully reviewing our paper and providing constructive comments and suggestions. We have revised the paper carefully according to the comments and suggestions, but we cannot upload the revision due to this year's NeurIPS rules. Our responses to individual reviewers can be found in the personal replies, but we would also like to briefly summarize the experimental revisions for your convenience.
- We add the comparison with two important state-of-the-art methods, ChiPFormer [1] and AutoDMP [2].
- We add more benchmarks to test different methods, i.e., the ibm series from the ICCAD'04 benchmark [3].
- We add some results of routing wirelength.
- We add the analysis of the initial pool size and the partition number of the canvas for WireMask-EA.
- We add more random seeds (from 5 to 30) on some benchmarks.
- We add the details of the number of search steps during the 1000-minutes optimization of the proposed WireMask-BBO.
Below are responses to some common questions.
### Q1 Why not consider ChiPFormer [1]?
ChiPFormer, a recent paper published at ICML 2023 and initially released on arXiv on June 26, 2023, came to our attention upon publication. However, because its release date fell after the NeurIPS 2023 submission deadline, we were unable to include it in our manuscript.
ChiPFormer adopts an offline RL method, focusing on the HPWL metric of macro placement during optimization, and is equipped with a mixed-size placement workflow. The method demonstrates remarkable performance on various chip placement tasks. We acknowledge that incorporating a comparison with ChiPFormer would enhance the comprehensiveness of our work.
To ensure a fair evaluation, we promptly reached out to the authors of ChiPFormer upon reading their paper and obtained a standardized processed dataset (named ibm01--ibm04). We compared our proposed WireMask-EA with ChiPFormer on ten chips, as outlined in Table 1 of the accompanying PDF file. Notably, WireMask-EA outperforms ChiPFormer clearly on 9 out of 10 circuits, regardless of the number of evaluations employed (1, 300, or 2k).
We will discuss ChiPFormer in our paper, as well as include experimental comparisons. Thank you for your valuable feedback.
### Q2 Why is congestion better when only HPWL is optimized?
We are sorry that we did not provide a persuasive explanation of this important scenario in the paper, but we try to explain it here. We choose the widely adopted RUDY (Rectangular Uniform wire DensitY) to approximate congestion. By analyzing the computation of HPWL and RUDY, we conclude that the RUDY approximation of congestion is sometimes positively related to the HPWL metric. Given a macro placement solution, the HPWL is computed as the sum of the rectangle's half-perimeter over each net (hyper-edge), i.e., $\sum_{e_j \in E} (w_j+h_j)$, where $e_j$ denotes a net, $E$ denotes the hyper-graph comprised of all nets, and $w_j$ and $h_j$ denote the width and height of the rectangle corresponding to $e_j$, respectively. RUDY measures the overall congestion on the canvas, and the congestion of each grid $g_i$ on the canvas is calculated as the cumulative impact of all nets encompassing the grid. Note that a net $e_j$ adds an impact of $\frac{1}{w_j} + \frac{1}{h_j}$ to each of its covered grids. Then, the overall congestion over all grids is $\sum_{g_i} \sum_{e_j \in E(g_i)} \left(\frac{1}{w_j} + \frac{1}{h_j}\right)=\sum_{e_j \in E} w_j\cdot h_j\cdot \left(\frac{1}{w_j} + \frac{1}{h_j}\right)=\sum_{e_j \in E} (w_j+h_j)=\mathrm{HPWL}$, where $E(g_i)$ denotes the set of nets whose corresponding rectangle covers the grid $g_i$. Note that the first equality holds because the number of times a net $e_j \in E$ is counted on the LHS equals the number of grids covered by it, which is $w_j \cdot h_j$. Thus, we can observe a positive relation between RUDY and HPWL. Besides our empirical results, Table 4 in MaskPlace [4] and Table 4 in ChiPFormer [1] have also shown that the best HPWL can lead to the best congestion. However, we should also note that a lower HPWL does not necessarily lead to a lower RUDY, because RUDY only considers the top-10\% most congested grids. We will revise the paper to add more discussion. Thank you.
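The identity above is easy to verify numerically; the sketch below (with made-up net bounding boxes) accumulates each net's per-grid RUDY contribution and compares the total to the HPWL:

```python
import math

# Toy nets, each given by the (width, height) of its bounding rectangle
# in grid units; positions do not matter for the identity.
nets = [(3, 2), (1, 5), (4, 4), (2, 7)]

# HPWL: sum of half-perimeters over all nets.
hpwl = sum(w + h for (w, h) in nets)

# RUDY-style total: each net adds 1/w + 1/h to each of the w*h grids
# covered by its rectangle, so its total contribution is w*h*(1/w + 1/h).
rudy_total = sum(w * h * (1.0 / w + 1.0 / h) for (w, h) in nets)

assert math.isclose(rudy_total, hpwl)  # equals HPWL, as derived above
```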
References
[1] ChiPFormer: Transferable Chip Placement via Offline Decision Transformer. ICML'23.
[2] AutoDMP: Automated DREAMPlace-based Macro Placement. ISPD'23.
[3] ICCAD’04 Mixed-size Placement Benchmarks. 2009.
[4] MaskPlace: Fast Chip Placement via Reinforced Visual Representation Learning. NeurIPS'22.
Pdf: /pdf/5aea30a0da4f16736a6be3cf2f667c6e02b10228.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Computing Approximate $\ell_p$ Sensitivities | Accept (poster) | Summary: The authors propose randomized algorithms to approximate $\ell_p$ sensitivity functions for $p \in [1,\infty)$, which extend leverage scores beyond the $\ell_2$ norm. The functions they consider are: 1. Estimating all sensitivities, 2. Estimating total sensitivity, and 3. Estimating maximum sensitivity. They provide different types of approximations for each task.
For task 1, they give an additive error, constant factor approximation. For task 2, they give a relative error, $(1+O(\gamma))$ approximation ($\gamma \in (0,1)$). And for task 3, they give a constant factor relative error approximation.
They demonstrate their algorithm initially for $\ell_1$ and then generalize it to all $\ell_p$ norms with $p > 1$. They also prove a hardness result by reducing $\ell_p$ regression to $\ell_p$ sensitivity estimation. Additionally, they implement their algorithm to estimate all sensitivities on 2 existing datasets and compare the average and maximum approximation ratios with the theoretical results.
The main techniques used in the algorithms involve hashing using Rademacher combinations, computing sensitivities with respect to subspace embeddings, splitting matrix rows based on leverage score intervals, and utilizing existing results for $\ell_\infty$ subspace embeddings.
Strengths: The problem of sensitivity estimation is useful for regression problems and has received limited attention for general $p$. Approximating total sensitivity is particularly significant in obtaining the sample complexity of learning arbitrary functions.
The algorithms presented are an interesting combination of known results from sensitivity sampling framework and RNLA. Specifically, their result for approximating total sensitivity is interesting because the computational complexity does not depend polynomially on the number of rows of the matrix, which is usually very large.
Finally, the analysis of the proposed algorithms is largely clear, helping to understand the guarantees of their algorithms. Overall, this paper is a significant contribution to the field of sensitivity sampling and dimensionality reduction.
Weaknesses:
- Algorithm 4 does not appear to be significantly novel, except for integrating generalized sensitivities into an existing $\ell_\infty$ subspace embedding technique, but the task is coherent with the other tasks mentioned.
- While the motivation for total sensitivity is clear, the authors have not motivated the problem of estimating maximum sensitivity. However, this might be because I am not familiar with it.
- I noticed some typos and missing definitions. In Algorithm 3, it is unclear where the vector of leverage scores $\tau(C)$ is being used. Additionally, $\omega$ is not defined, which I assume to be the matrix multiplication exponent.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: I would like to understand the comparison between approximate $\ell_p$ sensitivities and Lewis weights. From what I understand, it is known that Lewis weights cover sensitivities, at least for $p \in [1,2]$. The authors mentioned that Lewis weights are a crude approximation to sensitivities, although Lewis weights are used as a subroutine to obtain subspace embedding in their proposed algorithms. I am curious to see the benefits that an additive error approximation to sensitivities have over Lewis weight sampling.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: It would be beneficial if the authors consider adding a section addressing the open problems and limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We are very grateful to the reviewer for their time, effort, and feedback. We are encouraged that they found the problem we study important and our algorithm and techniques novel. We are also very grateful for all the weaknesses and typos pointed out and questions raised, and we will clarify all these points in our manuscript.
-----------------------
### “Usefulness of the maximum sensitivity”. ###
The maximum sensitivity **captures the importance of the most important datapoint** and finds applications in, e.g., experiment design to detect the most important features and in reweighting matrices for low coherence [4]. Additionally, it **captures the maximum extent to which a datapoint can influence the objective function**, thus finding applications in differential privacy [1].
---------------------------
### “Where is $\tau(C)$ used in Algorithm 3?”. ###
The leverage scores of $M$ (which is just a submatrix of $C$) are used critically in Lines 10-14 to split the rows into buckets. This step is actually quite subtle: The sensitivity of the $j^{th}$ row of $M$ with respect to $C$, i.e., $\sigma_1^{C}(M[j])$, is multiplicatively approximately the sensitivity of that row with respect to $SA$, which in turn is multiplicatively approximately the sensitivity of that row with respect to $A$. Further, the sensitivity of $M[j]$ with respect to $C$ can be sandwiched between appropriately scaled leverage scores of $C$. Therefore, we combine these two facts to obtain upper and lower bounds on the sensitivity of that row with respect to $A$. We formalize this notion in Lines 685-699 of the Supplementary material but will clarify the role of $\tau(C)$ better in the main text.
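As a hypothetical illustration of such a bucketing step (the dyadic-interval choice below is ours, not necessarily the exact scheme of Algorithm 3), rows can be grouped by which power-of-two interval their leverage score falls into:

```python
import math

def dyadic_buckets(tau):
    """Group row indices by dyadic leverage-score intervals:
    row i goes to bucket k when tau[i] lies in (2**-(k+1), 2**-k].
    This interval choice is illustrative, not the paper's exact scheme."""
    buckets = {}
    for i, t in enumerate(tau):
        k = int(math.floor(-math.log2(t)))
        buckets.setdefault(k, []).append(i)
    return buckets
```

Because leverage scores sandwich the sensitivities up to known factors, processing each bucket separately lets one work with rows whose importance is comparable.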
--------------------------
### “Using additive approximate sensitivities instead of Lewis weights”. ###
For the case $p > 2$, by using Theorem $1.5$ of [3], we have that $$\text{the sample complexity with approximate sensitivities} = O\left(\alpha^{2p} \cdot \mathfrak{S}_p^{2-2/p}(A)\right),$$ as opposed to $$\text{the sample complexity with true sensitivities} = O\left(\mathfrak{S}_p^{2-2/p}(A)\right),$$ and $$\text{the sample complexity with Lewis weights} = O\left(d^{p/2}\right).$$ Assume that $p>2$ is large and the total sensitivity $\mathfrak{S}_p(A)$ is small (say, $\mathfrak{S}_p(A) = d$). Further assume, as a toy example, $n = d^{10}$ and $\alpha = n^{\frac{1}{10 p}} = d^{\frac{1}{p}}$. Then, our approximate sensitivities give a sample complexity of $O(d^4)$, true sensitivities give a sample complexity of $O(d^2)$, and Lewis weights sampling gives $O(d^{p/2})$. Thus, our approximate sensitivities preserve the regression approximation guarantee, while **increasing the total sample complexity by only a small $\text{poly}(d)$ compared to that with true sensitivities (while still being much smaller than that with Lewis weights) and incurring a much smaller cost.** A detailed discussion of the connection of Lewis weights to well-conditioned bases can be found in Section $3.3$ of [4].
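The toy numbers above can be checked directly (the concrete values $p = 10$, $d = 4$ below are our illustrative choices):

```python
# Illustrative check of the sample-complexity comparison above:
# n = d**10 and alpha = n**(1/(10*p)) = d**(1/p), with total sensitivity d.
p, d = 10, 4.0
S_total = d                 # assumed small total sensitivity
alpha = d ** (1.0 / p)

approx_sens = alpha ** (2 * p) * S_total ** (2 - 2.0 / p)  # ~ d**4 regime
true_sens = S_total ** (2 - 2.0 / p)                       # ~ d**2 regime
lewis = d ** (p / 2.0)                                     # d**(p/2)

# Approximate sensitivities cost only a small poly(d) factor more than
# true sensitivities, and far less than Lewis weight sampling.
assert true_sens < approx_sens < lewis
```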
-----------------
### Conclusion ###
We once again thank the reviewer for their time, effort, and very thoughtful questions. We are happy to provide any further clarification required.
------------------
### References ###
[1] “The Algorithmic Foundations of Differential Privacy”, Cynthia Dwork and Aaron Roth, Foundations and Trends in Theoretical Computer Science 2014
[2] “Uniform Sampling for Matrix Approximation”, Michael B. Cohen, Yin Tat Lee, Cameron Musco, Christopher Musco, Richard Peng, Aaron Sidford, ITCS 2015
[3] “Sharper Bounds for $\ell_p$ Sensitivity Sampling”, David P. Woodruff and Taisuke Yasuda, ICML 2023
[4] "Dimensionality Reduction for Tukey Regression", Kenneth L. Clarkson, Ruosong Wang, and David P. Woodruff, ICML 2019, https://arxiv.org/abs/1905.05376
---
Rebuttal Comment 1.1:
Comment: Thanks for such a well-organized rebuttal. Your comments have clarified my questions. | Summary: The paper gives algorithms to compute, approximately, the individual sensitivity scores, total sensitivity, and maximum sensitivity for $\ell_p $ norms. The approximation for the individual sensitivity scores is additive, while a relative-error approximation is obtained for both the total and maximum sensitivity. Since calculating exact sensitivity scores is often computationally expensive, the authors provide faster algorithms that use techniques of multiplication with Rademacher vectors and hashing. The authors also empirically validate their results on real datasets.
Strengths: Strengths of the paper
1) Methods of constructing coresets for various ML problems rely heavily on good approximation to sensitivity scores. As such the paper is important and will be of interest to the community.
2) Empirical Results for various values of $p$ which are not usually seen in literature.
Weaknesses: To me the main weakness appears to be with the writing and clarity of the paper (may be because of space constraints) and also to an extent comparison and discussion with some related works. Here I list out some of the questions/ suggestions that I have:
1) The authors mention Lewis weights as a crude over-approximation to sensitivity scores for $\ell_p$. However, another way to sample rows for $\ell_p$ subspace embeddings is using the row norms of a well-conditioned basis. The authors have not discussed much work from this area. It would be interesting to check whether, for some well-conditioned basis, the row norms correspond to some approximation of the sensitivity scores.
2) In all the algorithms the authors rely on constructing $SA$, which is an $\ell_1$ subspace embedding. There is a good amount of work on $\ell_1$ subspace embeddings using 1-stable Cauchy random variables, exponential random variables, etc., which has not been compared with. It would be useful to give a table that compares these methods in terms of time, approximation factors, number of rows required, etc. Also, how is $SA$ calculated in this work, and what is the time taken for it? Please clarify.
3) Also, Algorithm 2 requires computing the leverage scores of $A$. Are they calculated exactly, in which case the time required will be $O(nd^2)$? And if they are also approximated, how does the approximation factor figure in your guarantees?
4) The implications of the lower bound are not clarified. Please elaborate.
5) It would be useful to give some motivation as to why maximum of the sensitivity scores is important.
Overall, the paper uses some known techniques from the randomized numerical linear algebra literature to calculate approximations to $\ell_p$ sensitivities. However, improved writing, better clarification of the exact contributions, and comparison with existing literature would strengthen it.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: Please see weaknesses section.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: Please see weaknesses section.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We are extremely thankful to the reviewer for raising a very thorough set of questions and comments and will incorporate the clarifications for each of them in our manuscript. We are encouraged by the reviewer’s assessment of our work as important and interesting for the community. We address below the reviewer's questions and concerns.
------------------------
### How is $SA$ computed? ###
We use the results of Cohen-Peng [2] to construct subspace embeddings. These incur a cost of only $O(\text{nnz}(A))$, which is the best possible cost (since that is the cost of merely reading the input data matrix), and so we do not compare with other methods. However, we acknowledge that there exist many methods for this task (such as the ones noted by the reviewer) and will incorporate them in our "Related Work" section to provide a more complete picture of the landscape on subspace embeddings.
----------------------
### Calculation of leverage scores. ###
As was shown in [1], leverage scores can be computed up to a constant-factor approximation in time $O(\text{nnz}(A) + d^{\omega})$ (ignoring polylogarithmic factors). This is the accuracy to which we compute leverage scores in our submission; indeed, since the error accumulates multiplicatively across the different steps of the algorithm, a high-accuracy algorithm offers no benefit over this constant-accuracy one.
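For reference, the exact $O(nd^2)$ baseline that this constant-factor approximation speeds up can be sketched via a thin QR factorization (a standard identity, not the paper's sketching algorithm):

```python
import numpy as np

def leverage_scores(A):
    """Exact l2 leverage scores tau_i = a_i^T (A^T A)^+ a_i via thin QR:
    tau_i equals the squared l2 norm of the i-th row of Q."""
    Q, _ = np.linalg.qr(A)          # O(n d^2); columns of Q span col(A)
    return np.sum(Q * Q, axis=1)

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 4))
tau = leverage_scores(A)
assert np.all((tau >= 0) & (tau <= 1 + 1e-9))  # scores lie in [0, 1]
assert np.isclose(tau.sum(), 4)                # and sum to rank(A) = d
```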
-------------------
### “Implication of lower bound”. ###
Our hardness results imply that in general one cannot compute all sensitivities as quickly as leverage scores unless there is a major breakthrough in regression. We will make this point clearer in our write-up.
---------------------
### “Usefulness of the maximum sensitivity”. ###
As detailed in our top-level response, computing the maximum sensitivity is useful in applications ranging from experiment design to differential privacy [3] to more generic pre-processing tasks such as reducing the coherence of the matrix, which was studied for $p=2$ in [4].
-------------------------------
### “Row norms of a well-conditioned basis” ###
It is indeed true that the Lewis basis is a well-conditioned basis, but as was shown in [5], there are many advantages that sensitivity sampling offers over Lewis weight sampling, such as a much lower sample complexity for $p>2$ or when the total sensitivity is small. Additionally, while other well-conditioned bases exist and can be computed in $\text{poly}(d)$ time, they result in worse $\text{poly}(d)$ factor distortion, as was shown in, e.g., [4].
-------------------------
### Conclusion ###
We again thank the reviewer for their time and effort in bringing up these questions; we will incorporate these answers in our manuscript.
Please let us know if any questions remain, and if we answered all the questions, we’d like to respectfully request that the reviewer re-consider their score of our submission. We are happy to answer any further questions.
-------------------------
### References. ###
[1] “Uniform Sampling for Matrix Approximation”, Michael B. Cohen, Yin Tat Lee, Cameron Musco, Christopher Musco, Richard Peng, Aaron Sidford, ITCS 2015
[2] "$\ell_p$ Row Sampling by Lewis Weights", Michael B. Cohen and Richard Peng, STOC 2015
[3] “The Algorithmic Foundations of Differential Privacy”, Cynthia Dwork and Aaron Roth, Foundations and Trends in Theoretical Computer Science 2014
[4] "Dimensionality Reduction for Tukey Regression", Kenneth L. Clarkson, Ruosong Wang, and David P. Woodruff, ICML 2019
[5] "Sharper Bounds for $\ell_p$ Sensitivity Sampling", David P. Woodruff and Taisuke Yasuda, ICML 2023
---
Rebuttal 2:
Title: Replying to Rebuttal
Comment: Thanks for the response. It clears my doubts and I have raised my score | Summary: This paper proposes a randomized algorithm for efficiently approximating the $\ell_p$ sensitivities, with a constant approximation parameter guaranteed.
Strengths: This paper presents several novel randomized algorithms for approximating $\ell_p$ sensitivities and related statistics, based on two key ideas:
(1) Using subspace embeddings to efficiently approximate $\|Ax\|_p$ that avoids computing $Ax$ on the whole dataset
(2) Randomly hashing the dataset into small subsets and computing the sensitivities for each subset separately
Given the above, this work provides efficient approximations of the sensitivities of all data samples, the total sensitivity, and the maximum one. The resulting algorithms are good contributions to the problem of estimating sensitivities that fill in the blank of computing $\ell_p$ sensitivities efficiently.
The theoretical guarantees appear to be solid and correct, which is further verified by several experiments.
Weaknesses:
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: I am a bit confused about the definition of $\boldsymbol{S}$ in line 191, where the size of $\boldsymbol{S}$ is $r$ by $d$; should it be $r$ by $n$? Also, in lines 189 and 193 it is said that $\boldsymbol{S}$ is 'diagonal', which seems to contradict its definition; please let me know if I missed anything.
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: NA
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We are very grateful to the reviewer for their time spent reviewing our submission and are encouraged by their positive assessment of the motivation of our work, our theory, and our experiments.
---------------
In response to the reviewer’s question: Yes, thank you for pointing out the typo in Line 191; the size of $S$ should be $r \times n$. We apologize for the confusion about $S$: one can think of $S$ as being a sparse diagonal matrix, and in the resulting matrix $SA$, delete all the zero rows. We hope this clarifies the structure of $S$ and will incorporate these fixes into our manuscript.
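One way to make this structure concrete (the uniform sampling and $\sqrt{n/r}$ rescaling below are only illustrative, not the paper's construction):

```python
import numpy as np

rng = np.random.default_rng(1)
n, d, r = 8, 3, 4
A = rng.standard_normal((n, d))

# Start from an n x n "sparse diagonal" sampling matrix and keep only the
# r rows of sampled indices, yielding the r x n matrix S described above.
idx = rng.choice(n, size=r, replace=False)
scale = np.sqrt(n / r)               # illustrative rescaling only
S = np.zeros((r, n))
S[np.arange(r), idx] = scale

SA = S @ A                           # r x d: sampled, rescaled rows of A
assert SA.shape == (r, d)
assert np.allclose(SA, scale * A[idx])
```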
---------------
We again thank the reviewer for their time and effort. Please let us know if there are any further questions that we could help clarify! | Summary: Given a matrix $A \in \mathbb{R}^{n \times d}$ and $p \in (0, \infty)$, the $l_p$ sensitivity of a vector $a \in \mathbb{R}^d$ with respect to $A$ is defined as $\sigma_p(a) := \max_x |a^\top x|^p / |Ax|_p^p$. It is known that by sampling each row $a_i$ of $A$ with probability proportional to (an upper bound on) $\sigma_p(a_i)$, we can obtain a coreset of $A$ with respect to the $l_p$ loss. Therefore, estimating $\sigma_p(a_i)$ quickly is desirable. This work presents three fast approximation algorithms for $l_p$ sensitivities. The main focus is on the case where $p = 1$, so I will state the results specifically for that case:
- An algorithm that provides an estimate $\tilde{\sigma}$ such that $\sigma_1(a_i) \leq \tilde{\sigma} \leq \sigma_1(a_i) + \alpha/n \mathfrak{S}_1(A)$, where $\mathfrak{S}_1(A)$ is the sum of $l_1$ sensitivities. The algorithm's running time is $O(n/\alpha (nnz(A) + d^\omega) + n)$, where $\omega$ is the matrix multiplication exponent.
- An algorithm that provides an estimate $\tilde{\sigma}$ such that $\mathfrak{S}_1(A) \leq \tilde{\sigma} \leq (1 + \gamma)\mathfrak{S}_1(A)$. The running time of this algorithm is $O(\sqrt{d} (nnz(A) + d^\omega))$, significantly faster than the naive bound of $O(n (nnz(A) + d^\omega))$.
- An algorithm that provides an estimate $\tilde{\sigma}$ such that $\max_i \sigma_1(a_i) \leq \tilde{\sigma} \leq \sqrt{d} \max_i \sigma_1(a_i)$. The running time for this algorithm is $O(d(nnz(A) + d^\omega))$, also faster than the naive bound.
The authors showed that $l_p$ regression reduces to $l_p$ sensitivity calculation, implying that designing a fast algorithm for $l_p$ sensitivity requires a fast algorithm for $l_p$ regression. Experimental results are provided to validate the theoretical bounds.
Strengths: The algorithm effectively utilizes $l_p$ subspace embedding to enhance its speed. Additionally, the recursive approach employed for computing total sensitivity is intriguing.
Weaknesses: - There are reservations about the usefulness of Theorem 1.2. It is possible to estimate sensitivities in $O(n (nnz(A) + d^\omega))$ time, as indicated in Fact 2.4. This means that, to achieve a significant speedup, $\alpha$ needs to be a function of $n$, which then results in a substantial additive error.
- The usefulness of Theorem 1.3 is unclear. Although the sum provides a bound on the coreset's size, it is not evident when one would solely desire the bound without the coreset itself.
- Similarly, the usefulness of Theorem 1.4 is not clear.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: - This is just a comment, but "additive error of $O(\alpha^p)$" should be "additive error of $\alpha^p/n \cdot \mathfrak{S}_p(A)$".
- What is the meaning of the "$\rho$-factor subspace embedding" in Algorithm 2? Should it be "$\rho$-approximate subspace embedding"?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: The limitations are not clearly mentioned.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We are deeply grateful to the reviewer for their careful reading of our manuscript and for raising very thoughtful questions, which we respond to below.
-----------------
### ”Usefulness of Theorem 1.2”: Runtime ###
**We would like to clarify that our actual runtime is significantly better than what we (mistakenly) first stated in the main submission.** The sub-optimal runtimes appeared only in our theorem statements; the algorithms and analyses originally presented indeed give the correct runtimes (e.g., cf. the Appendix of the Supplementary material, Lines 551-575). Specifically, our correct main result (for the case $p=1$, cf. Lines 73-74 of the Supplementary Material) is that we can approximate all $\ell_1$ sensitivities at a cost of $$ O\left(\text{nnz}(A) + \frac{n}{\alpha} \cdot d^\omega\right), $$ with an approximation guarantee $\sigma_1(a_i) \leq \widetilde{\sigma}_1(a_i)\leq \frac{\alpha}{n} \mathfrak{S}_1(A)$ for all $i\in [n]$. As the reviewer noted in their review, naively computing all sensitivities would cost $O(n \cdot (\text{nnz}(A) + d^\omega))$. Thus, **our obtained runtime greatly improves over the trivial and (to our best knowledge) only known result**. Our proof sketch for this runtime uses the construction of $\ell_1$ subspace embeddings [4] and a reduction to solving a $d\times d$ linear program (applying [5] to solve it). Please see the top-level response for the details.
-----------------
### "Usefulness of Theorem 1.2": Sample Complexity ###
For the case $p > 2$, by using Theorem $1.5$ of [1], we have that $$\text{the sample complexity with approximate sensitivities} = O\left(\alpha^{2p} \cdot \mathfrak{S}_p^{2-2/p}(A)\right),$$ as opposed to $$\text{the sample complexity with true sensitivities} = O\left(\mathfrak{S}_p^{2-2/p}(A)\right),$$ and $$\text{the sample complexity with Lewis weights} = O\left(d^{p/2}\right).$$ Assume that $p>2$ is large and the total sensitivity $\mathfrak{S}_p(A)$ is small (say, $\mathfrak{S}_p(A) = d$). Further assume, as a toy example, $n = d^{10}$ and $\alpha = n^{\frac{1}{10 p}} = d^{\frac{1}{p}}$. Then, our approximate sensitivities give a sample complexity of $O(d^4)$, true sensitivities give a sample complexity of $O(d^2)$, and Lewis weights sampling gives $O(d^{p/2})$. Thus, our approximate sensitivities preserve the regression approximation guarantee, while **increasing the total sample complexity by only a small $\text{poly}(d)$ compared to that with true sensitivities (while still being much smaller than that with Lewis weights) and incurring a much smaller cost.**
-----------------
### ”Usefulness of Theorem $1.3$”. ###
Please refer to our top-level response for the full answer. Briefly, one potential application of Theorem $1.3$ is to first run the corresponding algorithm to determine the total sensitivity and, only if this quantity is substantially small, switch to sensitivity sampling; this is based on the result of Woodruff and Yasuda [1] that in cases with low total sensitivity, sensitivity sampling significantly outperforms Lewis weight sampling.
-----------------
### "Usefulness of Theorem 1.4" ###
The maximum sensitivity **captures the importance of the most important datapoint** and finds applications in, e.g., experiment design to detect the most important features and in reweighting matrices for low coherence, which was studied for $p=2$ in [6]. Additionally, it captures the maximum extent to which a datapoint can influence the objective function, thus finding applications in differential privacy [3].
-----------------
### Clarifying notation. ###
Yes, $\rho$-factor subspace embedding means a $\rho$-approximate subspace embedding, i.e., a matrix $S$ such that for all $x$, we have $\|SAx\|_p^p \in (1\pm \rho) \|Ax\|_p^p$. We will include this in the notation section and propagate the change wherever applicable.
-----------------
### Conclusion. ###
We again thank the reviewer for their time and effort in bringing up very pertinent questions; we will incorporate these answers in our manuscript.
Please let us know if any questions remain. If we have answered all of your questions, we respectfully request that the reviewer reconsider their score of our submission. We are happy to answer any further questions.
-----------------
### References ###
[1] “Sharper Bounds for $\ell_p$ Sensitivity Sampling”, David P. Woodruff and Taisuke Yasuda, ICML 2023
[2] “Uniform Sampling for Matrix Approximation”, Michael B. Cohen, Yin Tat Lee, Cameron Musco, Christopher Musco, Richard Peng, Aaron Sidford, ITCS 2015
[3] “The Algorithmic Foundations of Differential Privacy”, Cynthia Dwork and Aaron Roth, Foundations and Trends in Theoretical Computer Science 2014
[4] "$\ell_p$ Row Sampling by Lewis Weights", Michael B. Cohen and Richard Peng, STOC 2015
[5] "Solving Linear Programs in the Current Matrix Multiplication Time", Michael B. Cohen, Yin Tat Lee, and Zhao Song, STOC 2019
[6] "Dimensionality Reduction for Tukey Regression", Kenneth L. Clarkson, Ruosong Wang, and David P. Woodruff, ICML 2019
---
Rebuttal Comment 1.1:
Comment: The response addresses some of my concerns, and I'll increase the score. | Rebuttal 1:
Rebuttal: # Top-Level Response #
We thank all the reviewers for their time, effort, and suggestions. Here, we restate some of our key contributions and answer common questions. We also reply to each reviewer individually.
---
## Motivating the problem ##
### Why sensitivities? ###
A common preprocessing step for $\ell_p$ regression ($\min_{x} \|Ax\|_p^p$) involves constructing an $\ell_p$ subspace embedding of matrix $A$. This is usually done via a sampling matrix $S$ that ensures $\|S A x\|_p^p = (1\pm \epsilon) \|Ax\|_p^p$ for all vectors $x$. This reduces our problem's data dimension from the number of rows of $A$ to that of $SA$.
A recent result [1] shows that **a sampling matrix $S$ built using the $\ell_p$ sensitivities of $A$ is much smaller than one built using its $\ell_p$ Lewis weights in many important cases**, e.g., when $p > 2$, or when the total sensitivity is small. Consequently, sensitivity sampling offers significant advantages over Lewis weight sampling, which also extends to many ML applications beyond regression [1, Section 1.1].
However, until our work, **there were no known efficient algorithms to approximate $\ell_p$ sensitivities for general $p>0$.** Our paper initiates a systematic study of algorithms for this task.
### Why total sensitivity? (asked by R1). ###
As alluded to above, when the total sensitivity is small, the sample complexity of sensitivity sampling is much lower than that of Lewis weight sampling [1]. Hence, a quick approximation to the total sensitivity can be used as **a fast test for whether or not to proceed with sensitivity sampling (involving the costly task of calculating all sensitivities).**
Further, for $\ell_2$ sensitivities, the total sensitivity is the rank, which may be used in rank estimation subroutines [6]. The total $\ell_p$ sensitivity is analogous to $\ell_p$ rank, which may be similarly useful.
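The rank identity for $p=2$ is easy to verify numerically: the $\ell_2$ sensitivities are the leverage scores $\sigma_2(a_i) = a_i^\top (A^\top A)^{+} a_i$, and they sum to $\mathrm{rank}(A)$. A minimal sketch (for illustration only, not our algorithm):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((100, 5))         # n = 100 rows, d = 5; full column rank a.s.
G = np.linalg.pinv(A.T @ A)               # (A^T A)^+
sens = np.einsum('ij,jk,ik->i', A, G, A)  # sigma_2(a_i) = a_i^T (A^T A)^+ a_i
total = sens.sum()                        # trace of the projection onto col(A) = rank(A)
```

Here `total` equals $5 = \mathrm{rank}(A)$ up to floating-point error, and each leverage score lies in $[0, 1]$.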
### Why maximum sensitivity? (asked by R1, R3, and R4). ###
The maximum sensitivity **captures the importance of the most important datapoint** and is used in, e.g., experiment design and in reweighting matrices for low coherence [7]. Additionally, it **captures the maximum extent to which a datapoint can influence the objective function** and is used in differential privacy [5].
---
## Clarification on our results. ##
We first clarify that some statements of our results in the main submission were correct but sub-optimally stated — we have corrected these in our full version (cf. Lines 510 - 519, Lines 72 - 73, Lines 83 - 84, and Line 89 of the Supplementary file). **These sub-optimal statements existed only in our theorems, and the algorithms and analyses originally presented indeed give the correct cost (e.g., cf. Lines 551 - 575 of the Supplementary file).**
### Computing all sensitivities. ###
Our correct main result for $p=1$ (cf. Lines 73-74 of the Supplementary Material) is that we can approximate all $\ell_1$ sensitivities (with additive error $\frac{\alpha}{n}\mathfrak{S}_1(A)$) at a cost of $ O(\text{nnz}(A) + (n/\alpha) \cdot d^\omega). $ As **R1** noted, the naive total cost is $O(n (\text{nnz}(A) + d^\omega))$. Thus, **our obtained result greatly improves over the naive and (to our best knowledge) only known result**.
### Proof sketch. ###
As a reminder, the proof sketch for this cost is: Constructing an $\ell_1$ subspace embedding $SA$ costs $O(nnz(A))$ [3] (asked by **R3**); plus, computing $n/\alpha$ sensitivities with respect to the $d \times d$ matrix $SA$ costs $(n/\alpha) \cdot d^\omega$ (by reducing one sensitivity computation w.r.t. $SA$ to solving a $d\times d$ linear program and using the current fastest LP solver [4]).
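To make the LP reduction concrete, under the standard definition $\sigma_1(a_i) = \sup_x |\langle a_i, x\rangle| / \|Ax\|_1$, a single $\ell_1$ sensitivity equals the optimum of the LP "maximize $a_i^\top x$ subject to $\|Ax\|_1 \le 1$", linearized with auxiliary variables $t \ge \pm Ax$. A sketch using an off-the-shelf solver (illustrating the reduction only, not our fast algorithm):

```python
import numpy as np
from scipy.optimize import linprog

def l1_sensitivity(A, i):
    """sigma_1(a_i) = max a_i^T x  s.t.  ||Ax||_1 <= 1, as an LP in variables (x, t)."""
    n, d = A.shape
    c = np.concatenate([-A[i], np.zeros(n)])  # linprog minimizes, so negate a_i^T x
    A_ub = np.block([
        [A, -np.eye(n)],                      #  Ax - t <= 0
        [-A, -np.eye(n)],                     # -Ax - t <= 0
        [np.zeros((1, d)), np.ones((1, n))],  # sum(t) <= 1, i.e. ||Ax||_1 <= 1
    ])
    b_ub = np.concatenate([np.zeros(2 * n), [1.0]])
    bounds = [(None, None)] * d + [(0, None)] * n
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
    return -res.fun

# Two identical rows: each contributes half of ||Ax||_1, so sigma_1 = 1/2.
A = np.array([[1.0], [1.0]])
```

In our algorithm, $A$ is first replaced by the small $d \times d$ subspace embedding $SA$, so each such LP has only $O(d)$ constraints.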
### Sampling with approximate sensitivities (asked by R1 and R4). ###
For $p > 2$, Theorem $1.5$ of [1] implies $$\text{the sample complexity with approximate sensitivities} = O\left(\alpha^{2p} \cdot \mathfrak{S}_p^{2-2/p}(A)\right),$$ as opposed to $$\text{the sample complexity with true sensitivities} = O\left(\mathfrak{S}_p^{2-2/p}(A)\right),$$ and $$\text{the sample complexity with Lewis weights} = O\left(d^{p/2}\right).$$ Assume a large $p>2$ and small total sensitivity, say, $\mathfrak{S}_p(A) = d$. Further assume, as a toy example, $n = d^{10}$ and $\alpha = n^{\frac{1}{10 p}} = d^{\frac{1}{p}}$. Then, sample complexity with approximate sensitivities is $O(d^4)$, with true sensitivities is $O(d^2)$, and with Lewis weights is $O(d^{p/2})$. Thus, our approximate sensitivities preserve the regression approximation guarantee, while **increasing the total sample complexity by only a small $\text{poly}(d)$ compared to that with true sensitivities (while still being much smaller than that with Lewis weights) and incurring a much smaller cost.**
### Computing the total and maximum sensitivities. ###
For the total and maximum $\ell_1$ sensitivities, our runtimes are, respectively, $ O(\text{nnz}(A) + d^{\omega + 1/2})$ and $O(\text{nnz}(A) + d^{\omega+1}),$ with **no polynomial dependence on $n$**; both are significantly better than the naive cost.
---
## References ##
[1] “Sharper Bounds for $\ell_p$ Sensitivity Sampling”, David P. Woodruff and Taisuke Yasuda, ICML 2023
[2] “Uniform Sampling for Matrix Approximation”, Michael B. Cohen, Yin Tat Lee, Cameron Musco, Christopher Musco, Richard Peng, Aaron Sidford, ITCS 2015
[3] "$\ell_p$ Row Sampling by Lewis Weights", Michael B. Cohen and Richard Peng, STOC 2015
[4] "Solving Linear Programs in the Current Matrix Multiplication Time", Michael B. Cohen, Yin Tat Lee, and Zhao Song, STOC 2019
[5] “The Algorithmic Foundations of Differential Privacy”, Cynthia Dwork and Aaron Roth, Foundations and Trends in Theoretical Computer Science 2014
[6] “Rank Estimation For (Approximately) Low-Rank Matrices”, Niloofar Bayat, Cody Morrin, Yuheng Wang, and Vishal Misra, ACM SIGMETRICS Performance Evaluation Review 2022
[7] "Dimensionality Reduction for Tukey Regression", Kenneth L. Clarkson, Ruosong Wang, and David P. Woodruff, ICML 2019 | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Global-correlated 3D-decoupling Transformer for Clothed Avatar Reconstruction | Accept (poster) | Summary: This paper aims at reconstructing 3D clothed human models from single images. Current methods rely heavily on 2D image features extracted from the input image, while ignoring information lying in the planes orthogonal to the input image plane. To address this limitation, this paper proposes a new method that utilizes attention mechanisms to construct tri-plane features in order to capture more 3D information. In addition, the authors also introduce a new feature query strategy that combines spatial query and manifold query. Results show that the proposed method outperforms existing baselines, showing high robustness to challenging poses and loose clothing.
Strengths: Current methods typically use 2D convolutional networks to extract pixel-aligned features on the xy-plane, and rely on the networks themselves to infer 3D information along z-axis. In contrast, this paper explicitly considers the other two planes, i.e., the yz-plane and the xz-plane, and extracts features on these two planes as well. I think the insight behind the proposed method is valuable and can inspire future research.
Weaknesses: 1. My biggest concern is the inconsistent results. In Figure 1, the authors demonstrate a nice reconstruction result where geometric details are fairly recovered. However, in Figure 3 of the Supp.Mat., the reconstruction results of the proposed method are really coarse and many geometric details are missing. In addition, this figure shows that the proposed method performs significantly worse than ECON in terms of the front-view normal quality, although they should perform similarly according to Table 2 in the main paper.
2. Although the proposed method outperforms SOTA quantitatively, it comes at a cost of a much heavier network architecture. According to the method description, the proposed method uses multiple networks, including a vision transformer, several attention-based networks, an Hourglass network, an MLP as well as learnable embeddings. Compared to the networks used in PIFu/PaMIR/ICON/ECON, these networks are more complex and much heavier, containing a larger number of learnable parameters. Therefore, the performance gains may be due to "bigger" networks. Unfortunately, I do not see any discussion of method complexity in the paper.
3. Also, large networks may be prone to overfitting. However, it seems that the authors can train such large networks using a relatively small amount of data, i.e., only about 500 scans. It is not clear how the authors achieved this.
4. I think Figure 4 should show texture from more viewpoints. The current version only shows the prediction results in the front view, which can be directly queried from the input image and cannot prove the texture estimation performance of the proposed method.
5. The learnable embeddings do not make sense to me. If I understand correctly, the learnable embeddings remain fixed after network training and are the same for different input images. However, for different images, the side view varies dramatically. Therefore, I don't think there exist unified embeddings that fit all input scenarios. I would like to see the visualization of learnable embeddings if possible.
6. Although I agree that solely relying on the xy-plane is insufficient to reconstruct 3D models, I am suspicious of whether the proposed method really solves this issue. According to Figure 2 in the Supp.Mat., the yz-plane and xz-plane feature maps do not contain much valid and distinguishable information compared to the xy-plane. In other words, I guess that the learned network still relies heavily on the xy-plane to reconstruct 3D models, and the robustness to challenging poses mainly results from the introduction of prior-enhanced queries.
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: 1. Since the principal plane is refined after 3D-decoupling decoding, why not refine other planes (i.e., yz and xz planes) as well?
2. The authors keep emphasizing "global correlation" in the paper, but I cannot get the exact meaning of this term after reading the paper. Could the authors provide more detailed explanation or some examples to help reader understand what "global correlation" means?
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 2 fair
Presentation: 3 good
Contribution: 2 fair
Limitations: The authors addressed the limitations and potential social impact in the supplemental document.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We deeply appreciate your recognition of the valuable insight behind our method and its potential to inspire future research.
We will address your inquiries and concerns point by point in the following responses.
- **Answer to weakness 1**: Thank you for your keen observation. Because similar comments have been raised by other reviewers, we consolidate our response in the global author response "Answer for Unsatisfactory results in Figure 3 of SupMat".
- **Answer to weakness 2**: Our results suggest that the performance improvement primarily arises from the intrinsic properties of the global-correlated 3D-decoupling transformer and the intricately crafted framework, rather than solely the parameter scale:
1. **Our transformer-based feature extractor possesses fewer parameters than the UNet-based counterpart but performs better.**
2. **Our model's time efficiency is comparable to other state-of-the-art (SOTA) methods that utilize implicit functions, while significantly outperforming ECON.**
3. **Please refer to global response for details about these comparisons.** In our forthcoming revisions, we will provide a clearer exposition of this issue in the supplementary material.
- **Answer to weakness 3**:
1. **The training dataset we utilized, derived from THuman2.0, is substantial, which helps mitigate the risk of overfitting.** While it is true that we only selected about 500 scans from THuman2.0 for training, each scan was rendered in 36 views under various environmental lighting conditions. Consequently, the total number of data points used during training reached 18,180 (505 × 36), which is a considerably large dataset.
2. **Using a smaller dataset is also feasible in this field, but it can lead to longer training times.** For example, the S3F model [7], trained for five days on eight V100 GPUs with a batch size of 8, is larger than our model. Notably, our model, trained in two days on an NVIDIA GeForce RTX 3090 GPU with a batch size of 4, achieved competitive results. **S3F achieved SOTA performance using only 245 Renderpeople scans**, rendered five times each, as mentioned in [7] section 4.1 (Table 2, "Only synthetic data").
3. **The current state of this field lacks comprehensive research on the impact of training data size and the potential for overfitting.** We acknowledge and align with the reviewer's attention to this aspect. In the future, we intend to conduct further exploratory experiments to delve into these aspects more extensively.
- **Answer to weakness 4**:
1. **We provide texture results from back side view and various view points in Figure 8 of the main text, Figure 4 in the supplementary material, and the SupMat's accompanying video.**
2. **We have meticulously computed PSNR for the outcomes rendered from multiple viewpoints.**
3. We greatly appreciate your suggestions and we are actively considering adjustments to the layout to incorporate texture results from different angles within the paper.
- **Answer to weakness 5**:
1. **Despite the learnable embedding remaining fixed as query, the input image still serves as both the value and key inputs to the decoder.** This mechanism continues to decode features associated with different planes that maintain correlations with the image, as opposed to features unrelated to the image (refer to section 3.2 lines 166-172).
2. **The renowned object detection model DETR [a] also employs fixed learnable embeddings as queries in its decoder to detect objects in various input images.** This further illustrates the applicability of fixed learnable embeddings across different input images.
3. **In our ablation experiments, the reconstruction quality noticeably deteriorates when the learnable embedding is not employed** (as referenced in Section 4.2, lines 275-277, and "w/o cross-atten" in Table 3(a)). This compellingly underscores the significant role played by the incorporation of learnable embeddings.
- **Answer to weakness 6**:
1. The features from the principal plane are crucial for reconstruction, yet previous methods have not considered incorporating features from the other two planes.
2. We address the challenges posed by heavy reliance on pixel-aligned features by introducing a novel tri-plane feature-based reconstruction approach. This concept, endorsed by reviewers Lf3d, rJsL, and 7DFs, is well-motivated, innovative, and efficacious.
3. The ablation study demonstrates the superiority of utilizing triplane features over relying solely on xy-plane features (2D+hybrid), as evidenced in Table 3(b).
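For intuition, the tri-plane feature query underlying points 1-3 can be sketched as follows (an illustrative simplification only, not our actual implementation; the function name, nearest-neighbour sampling, and aggregation by summation are chosen here for brevity, whereas bilinear interpolation is the usual choice in practice):

```python
import numpy as np

def triplane_query(planes, pts):
    """Sample per-point features from three axis-aligned feature planes.

    planes: dict with 'xy', 'yz', 'xz' arrays of shape (R, R, C);
    pts: (N, 3) coordinates in [0, 1).
    """
    R = planes['xy'].shape[0]
    # Project each 3D point onto the three planes and discretize to grid indices.
    idx = np.minimum((pts * R).astype(int), R - 1)
    x, y, z = idx[:, 0], idx[:, 1], idx[:, 2]
    # Aggregate the three plane features by summation (concatenation is also common).
    return planes['xy'][x, y] + planes['yz'][y, z] + planes['xz'][x, z]
```

The key point is that the yz- and xz-plane features contribute information along the z-axis that a purely pixel-aligned (xy-only) query cannot provide.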
- **Answer to question 1**:
1. Refining yz and xz planes with the input image would merge decoupled features with the xy plane features, contradicting our goal of orthogonal plane feature separation.
2. If we were to independently design networks to refine the xz and yz planes without incorporating the input image, we believe this approach would yield results similar to introducing additional layers within the decoder. However, this strategy would make the model excessively bulky, thereby escalating training costs.
- **Answer to question 2**: **The concept of "global correlation" serves as a means to highlight the disparities between transformer and convolution architectures**. Convolutions possess localized receptive fields, whereas transformers utilize attention mechanisms to establish connections across the entirety of input data, fostering global feature understanding (refer to lines 41-45 and lines 275-278). This attribute aids in disentangling the intricate 3D features, thus contributing to the enhanced reconstruction process.
Thank you for your question. We will provide clearer explanations in the article through further revisions.
**Reference**:
[a]. Carion N, Massa F, Synnaeve G, et al. End-to-end object detection with transformers[C]//European conference on computer vision. Cham: Springer International Publishing, 2020: 213-229.
---
Rebuttal Comment 1.1:
Comment: I would like to thank the authors for their reply to my questions. My major concerns are addressed in the rebuttal. After reading the rebuttal and other reviews, I would like to raise my rating and vote for a borderline accept.
---
Reply to Comment 1.1.1:
Comment: Thank you for your thoughtful review and for taking the time to reconsider our submission after reading our rebuttal. We are pleased to hear that our responses have addressed your major concerns. We truly appreciate your constructive comments throughout the review process, which have greatly helped in improving our work. | Summary: This paper proposes a new method for the task of single-view 3D human reconstruction. The main idea is to extract 3D tri-plane features from the input image using a transformer-based architecture, instead of using CNN to extract 2D pixel-aligned features as done in previous works. The experiments demonstrate that the proposed method enables more accurate and detailed reconstruction compared to SoTA.
Strengths: **Method:** The method is technically sound. Using tri-plane representation and using a transformer to generate side planes is an interesting design choice and seems to be effective.
**Experiment:** The results are convincing. The authors compare the method with SOTA methods following standard protocol and report a noticeable improvement over SOTA. The ablation study is very well designed and clearly demonstrates the effectiveness of each individual component.
**Presentation:** The paper is well-structured and written. I find it easy to follow and understand.
Weaknesses: I am overall positive about this paper because the idea of using transformers to extract tri-plane features from 2D images is well-motivated and technically sound, and the idea has been very well executed and validated.
I don’t find any critical flaws in the paper. The only weakness is that the back side texture is still blurry but this is a common problem for all existing works.
However, since the proposed method is an adaption of well-established methods (tri-plane, transformer) to a specific task, the originality and the potential impact are limited IMO.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: 1. Most previous methods predict frontal and back normal maps to help 3D human reconstruction. This paper does not have this additional step, which simplifies the framework and is good. However, it would be helpful if the authors could comment on whether the proposed method is compatible with normal map input. And could a normal map further improve the result?
2. Since the method can reconstruct loose clothing and the authors have demonstrated animation, I am wondering what the animation of loose clothing (e.g. long dresses) look like.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: There is no discussion about the limitation of the proposed method.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We are grateful for your recognition of the technical soundness of our method and the effectiveness of our design choices. Your acknowledgment of our convincing results and well-structured presentation is highly appreciated.
We will address your inquiries and concerns point by point in the following responses.
- **Answer to question 1**:
1. We sincerely appreciate your recognition of our work. In our research, **we do utilize normal maps within our framework, as mentioned in Section 3.3 (lines 213-215) and Supplementary Material Section A (lines 23-25).**
2. **Since the emergence of [6], the integration of normal maps has gained traction within the realm of 3D human reconstruction.** Normal maps can be efficiently generated through available, model-free techniques [6] or guided by SMPL-based approaches, exemplified in [2, 4]. In our study, we chose to employ the pre-trained model from [2] for normal prediction due to its exceptional accuracy and stability. **Building upon the use of normal maps, our model introduces a global-correlated 3D-decoupling transformer to disentangle tri-plane features and a hybrid prior fusion strategy, leading to state-of-the-art results.**
- **Answer to question 2**:
1. We sincerely appreciate your interest in our work and your query regarding the animation of loose clothing. In response, **we provide animation results of loose clothing in the first row of Figure 5 in the supplementary material. Additionally, we have included results in Figure 2 of the PDF attachment in our "global response" in the rebuttal.**
**Explanation:**
1. **The strength of our animation method lies in its simplicity and efficiency, as it eliminates the need for additional models.** By directly animating the mesh within the same framework, our approach achieves natural and smooth animated results that are competitive with, or even surpass, those of models requiring separate training for animation.
2. As elaborated in Supplementary Material Section A (lines 45-56), animation relies on the SMPL model, utilizing barycentric interpolation for feature acquisition. For loose clothing, the garment vertices follow the nearest SMPL surfaces' movement, occasionally causing separation in long dresses due to leg movement. **Although this issue is prevalent across this field, resolving it remains a challenge.** We look forward to advancing this field by exploring innovative techniques and dedicated strategies for animating loose clothing.
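A simplified sketch of this animation strategy (a nearest-vertex variant of the barycentric interpolation described in Supplementary Material Section A; the function name and the nearest-vertex simplification are illustrative, not our exact implementation):

```python
import numpy as np

def animate_garment(garment, smpl_rest, smpl_posed):
    """Move each garment vertex by the displacement of its nearest SMPL vertex.

    garment: (M, 3) clothed-mesh vertices in the rest pose;
    smpl_rest, smpl_posed: (V, 3) SMPL vertices before/after reposing.
    """
    # Pairwise distances from garment vertices to rest-pose SMPL vertices.
    d = np.linalg.norm(garment[:, None, :] - smpl_rest[None, :, :], axis=-1)
    nearest = d.argmin(axis=1)
    # Loose garment regions far from the body inherit the nearest body motion,
    # which is why long dresses can occasionally separate under leg movement.
    return garment + (smpl_posed - smpl_rest)[nearest]
```

This illustrates why the separation artifact mentioned above arises: dress vertices between the legs are assigned to one leg or the other, so large leg motions can pull neighbouring garment vertices apart.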
- **Answer about limitations**: Due to space constraints, **we have included discussions on Limitations and Broader Impact in the supplementary material, specifically in Sections C and D.**
1. **Our model has two main limitations.** Firstly, if the HMR model estimates an inaccurate SMPL pose, it may lead to mesh errors. Secondly, extremely loose clothing in input images can challenge complete garment reconstruction. Refer to Figure 1 in the supplementary material for visual examples.
2. **It is essential to highlight that these limitations are not exclusive to our model; they are challenges faced by many state-of-the-art (SOTA) approaches in the field.** Our focus will be on enhancing robustness, ensuring accurate mesh reconstruction despite pose errors, and improving the reconstruction of loose clothing.
- **Answer for comment "Since the proposed method is an adaption of well-established methods (tri-plane, transformer) to a specific task, the originality and the potential impact are limited IMO"**:
1. **Our adoption of these techniques was driven by the imperative to accurately reconstruct 3D human meshes from 2D images.** Notably, our approach distinguishes itself from prior state-of-the-art (SOTA) methods, representing a novel direction within this field. Empirical validation substantiates our model's attainment of SOTA performance.
2. **Mere aggregation of these techniques proved insufficient in achieving exceptional reconstruction outcomes.** Our focus on the 3D human body reconstruction task led to tailored enhancements of these techniques. Innovations such as the global-correlated 3D decoupling transformer and hybrid prior fusion strategy were meticulously integrated to meet our objectives, culminating in the observed SOTA performance.
3. **The evaluations of Reviewers Lf3d and rJsL unanimously underscore the novelty inherent in our approach, addressing the limitations that have persisted in the realm of 2D feature-based methods**. Their recognition underscores our method's significance in overcoming the constraints associated with traditional reliance on 2D feature methodologies.
---
Rebuttal Comment 1.1:
Comment: Thanks for the rebuttal. My concerns are well addressed and hence I will keep my positive rating.
---
Reply to Comment 1.1.1:
Comment: Thank you for taking the time to review our rebuttal and for your positive feedback. We are glad to hear that our responses addressed your concerns. We truly appreciate your constructive comments throughout the review process, which have greatly helped in improving our work. | Summary: This paper presents a single image human reconstruction method. Different from previous pixel-aligned 2D features, authors propose to extend the 2D features to 3D features using triplane. Moreover, SMPL prior is introduced to enhance the extracted features. Qualitative and quantitative experiments are thoroughly performed to show the effectiveness of the proposed method.
Strengths: - The paper is well written and easy to follow.
- The 3D-decoupling decoder is well-designed and proved to be effective. Especially, the side views have noticeable advantage compared with methods using pixel-aligned methods.
- Taking the 3D human prior into consideration is reasonable. Though, I might have some concerns with it (see weaknesses).
- The experiments are thorough and compelling. SOTA performance is achieved on standard benchmarks.
Weaknesses: - The 3D human prior relies on accurate SMPL estimation, which is not quite reliable, especially for hard poses. Therefore, a discussion of how the model would perform if the input SMPL is not accurate should be included.
- I have no complaints on the geometry part. But the texture reconstruction results are not satisfactory. Compared with NeRF-based methods (e.g. SHERF: Generalizable Human NeRF from a Single Image), the rendering quality is inferior. I understand that this is caused by the technical choices. But I think this deserves some discussion. This can also make a potential future direction.
- Qualitative ablation studies on more complex poses are expected. From qualitative comparison with other methods, geometry qualities of arms are better. My guess is that the SMPL prior might be the main contributor here. Therefore, I think more qualitative ablation studies is in need here.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: N/A
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: No. Authors should discuss more on the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely appreciate your positive feedback on the clarity of our paper and the effectiveness of our 3D-decoupling decoder. Your recognition of our thorough and compelling experimental results is highly valued. We are grateful for your comments.
We will address your inquiries and concerns point by point in the following responses.
- **Answer to weakness 1**: We appreciate your insightful comment, and we fully agree that accurate SMPL estimation is crucial for achieving precise 3D human reconstruction. Due to the constraint of page limitations, **we have included discussions on limitations and broader impact in the supplementary material (please refer to Sections C and D).**
1. **We adopt the SMPL parameter optimization process from [2] to mitigate inaccuracies arising from HMR estimates** (as indicated in Section A, lines 8-10 of the Supplementary Material). Our goal is to attain the utmost accuracy in reconstruction outcomes. Yet, as illustrated in Figure 1 of the supplementary material, substantial inaccuracies in HMR-derived SMPL estimations can result in erroneous reconstruction outcomes.
2. **It's crucial to emphasize that this limitation is not unique to our model; rather, it's a common challenge encountered by numerous cutting-edge approaches in the field.** The issue of handling inaccuracies stemming from SMPL estimation while leveraging SMPL priors for reconstruction and bolstering model robustness is a vital research avenue. This area will be a central focus of our forthcoming research efforts.
- **Answer to weakness 2**:
1. **Our work follows the previous research of PIFu, ICON, and similar methods, with a focus on enhancing the accuracy of reconstructed geometry.** Thus, we employ a mesh as our 3D representation.
2. **Despite this, we have maintained vigilant attention to the advancements in the field of NeRF.** We concur that leveraging NeRF may yield promising advancements in texture reconstruction, positioning it as a compelling avenue for future research. However, NeRF currently exhibits several drawbacks in comparison to the 3D mesh representation we employ, such as storage, transmission, and real-time rendering challenges.
3. **We posit that the primary constraint hindering the application of NeRF in the digital human domain is its inability to accurately reconstruct geometry.** This limitation restricts the utility of NeRF-based approaches in applications such as animation and interactive environments. Better establishing the relationship between NeRF and spatial geometric structure could lead to significant advancements for NeRF-based methods.
Thank you for your suggestions. We will incorporate relevant discussions in the revised version of this paper.
- **Answer to weakness 3**: **The utilization of human body priors in studies related to digital humans is a prevailing consensus in current research.**
1. Among the methods compared, including PaMIR [3], ICON [2], ECON [4], and our proposed approach, all utilize SMPL(-X) as a human body prior. In contrast to early methods like PIFu [1] and PIFuHD [6] that lack such a human body prior, incorporating the human body prior provides a robust constraint for reconstruction results, preventing the generation of meshes that deviate from the anatomical structure when dealing with complex poses.
2. Currently, methods that do not utilize a human body prior exhibit a significant performance lag compared to those leveraging human priors. Figure 3 in the supplementary material and Figure 1 in our global response PDF illustrate this point.
- **Answer about limitations**: Due to space constraints, **we have included discussions on Limitations and Broader Impact in the supplementary material, specifically in Sections C and D.**
1. **Our model has two main limitations.** Firstly, if the HMR model estimates an inaccurate SMPL pose, it may lead to mesh errors. Secondly, extremely loose clothing in input images can challenge complete garment reconstruction. Refer to Figure 1 in the supplementary material for visual examples.
2. **It is essential to highlight that these limitations are not exclusive to our model; they are challenges faced by many state-of-the-art (SOTA) approaches in the field.** Our focus will be on enhancing robustness, ensuring accurate mesh reconstruction despite pose errors, and improving the reconstruction of loose clothing.
---
Rebuttal Comment 1.1:
Comment: I thank all authors for providing thorough rebuttals. My concerns have been properly addressed or responded to. I would keep my rating.
---
Reply to Comment 1.1.1:
Comment: Thank you for your thoughtful review and for considering our rebuttals. We are pleased to hear that our responses addressed your concerns. We truly value your feedback, which has been instrumental in refining our work and providing insights for guiding the direction of our future research. | Summary: The authors propose to reconstruct a clothed 3D human avatar from a single 2D image by introducing a global-correlated 3D-decoupling transformer to disentangle tri-plane features. A hybrid prior fusion strategy in the feature query phase is introduced to combine the spatial query’s localization capabilities with the prior-enhanced query’s ability to incorporate knowledge of the human body prior. The experiments on CAPE and THuman2.0 datasets demonstrate the effectiveness of the proposed method which outperforms state-of-the-art approaches in both geometry and texture reconstruction.
Strengths: * The idea to introduce a global-correlated 3D-decoupling transformer to disentangle tri-plane features is novel.
* The comparison experiments and ablation experiments are comprehensive and sound.
Weaknesses: * In Figure 3 of the supplementary material, the result of GTA contains unnatural clothes wrinkles, while ECON performs better. It seems that the results in Figure 3 of the supplementary material are worse than the results in Figure 4 of the main paper, as the former contains fewer geometric details.
* In Table 1, the proposed method is slightly better than ECON in Chamfer and slightly worse in Normals. The explanation about this comparison doesn’t make much sense to me, and the quantitative improvement is minor.
* The paper lacks an adequate explanation of the benefits of introducing 3D features via tri-plane or the query strategy.
One minor issue:
* The predicted normal and SDF are only explained in Equation 7, and they should also be included in Figure 3.
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: * In Table 1, I am wondering why the scores of ECON on the THuman2.0 dataset are much worse than on CAPE-NFP and CAPE-FP.
* Please explain the difference between the visualized results in Figure 3 (supplementary material) and that in Figure 4 (main paper), as mentioned in 'Weaknesses'.
* What is the method to predict the normal map in this work?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 3 good
Contribution: 3 good
Limitations: Inaccurate estimation of the SMPL(-X) model would affect the reconstruction, and the performance may degrade if the subject wears extremely loose clothing that considerably deviates from the human body.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We are truly grateful for your recognition of the novelty in our approach of introducing a global-correlated 3D-decoupling transformer. Your acknowledgment of our comprehensive and sound experimental comparisons is highly appreciated.
We will address your inquiries and concerns point by point in the following responses.
- **Answer to weakness 1 and question 2**: Please refer to global response "Answer for Unsatisfactory results in Figure 3 of SupMat".
- **Answer to weakness 2**:
1. **The predominant factor contributing to GTA's surpassing performance over ECON [4] in terms of Chamfer Distance and P2S metrics lies in our consideration of features of the orthogonal planes during reconstruction.** Firstly, both of these metrics are employed to assess large geometric differences. As exemplified by the results presented in Figure 3 of the SupMat, ECON tends to exhibit larger errors in the orthogonal planes compared to GTA. These orthogonal plane errors manifest in three-dimensional space as overall surface shifts, significantly influencing the Chamfer Distance and P2S metrics.
2. **ECON's superior performance on Normals can be attributed to its utilization of explicit integration to predict the front and back depth maps.** The resulting depth maps harbor a multitude of high-resolution details, thus yielding better Normals in front and back normal maps. However, ECON's Normals in the orthogonal planes are generally worse than GTA's (refer to Section A.4, lines 404 - 407 in the SupMat), consequently leading to variances in overall Normals metrics depending on the dataset. In simpler datasets such as CAPE [16], characterized by minimal occlusion, differences in Normals of orthogonal planes are marginal. Thus, the Normals metric of ECON is slightly better. Conversely, in more intricate datasets like THuman2.0 [15], characterized by complexity and substantial occlusions, the disparities in the normal maps are pronounced, resulting in GTA exhibiting better overall Normals.
3. **As an implicit function-based monocular human reconstruction method, our quantitative improvements are significant.** While our quantitative metrics exhibit relatively minor enhancements compared to ECON on CAPE-NFP, it's important to note that ECON is an explicit optimization-based approach that demands substantial computational resources. As demonstrated in the global author response, GTA's inference time is notably superior to that of ECON and comparable to other implicit function methods. Moreover, GTA outperforms ECON in the evaluation results on the CAPE-FP [16] and THuman2.0 [15] datasets.
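As a rough illustration of why overall surface shifts dominate these metrics, here is a minimal Chamfer-distance sketch on hypothetical point clouds (illustrative only, not the paper's evaluation code):

```python
import numpy as np

def chamfer_distance(A, B):
    """Symmetric Chamfer distance between point sets A (N, 3) and B (M, 3):
    mean nearest-neighbour distance in both directions."""
    d = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=-1)  # (N, M) pairwise
    return d.min(axis=1).mean() + d.min(axis=0).mean()

rng = np.random.default_rng(0)
gt = rng.standard_normal((200, 3))          # hypothetical ground-truth points
shifted = gt + np.array([0.05, 0.0, 0.0])   # a uniform 0.05 surface shift

# A uniform shift moves every nearest-neighbour match, so the metric grows
# with the offset: here at most 0.1 in total (0.05 in each direction).
print(chamfer_distance(gt, gt), chamfer_distance(gt, shifted))
```

Because every point of the shifted set moves together, the error accumulates across the whole surface, which is why plane-level shifts weigh so heavily on Chamfer/P2S.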
Thank you for your feedback. We will incorporate more relevant discussions in the revised version of this paper.
- **Answer to weakness 3**: **While briefly addressing these in Introduction, lines 27-49, page constraints led us to emphasize testing results.** We value this chance to provide a more thorough explanation, enhancing transparency and understanding of our methodology.
1. The tri-plane representation is pivotal in our method. 2D approaches struggle with complex 3D details, especially in challenges like occlusion. **Encompassing xy, yz, and xz planes, the memory-efficient tri-plane approach (see intro, lines 38-41) provides a comprehensive view of 3D structure from a 2D image.**
2. **Our model gains spatial comprehension by extracting features from distinct planes** (refer to lines 118-122). This is crucial for precise 3D reconstruction and especially valuable in complex scenarios where depth matters. The tri-plane technique utilizes varied spatial data, improving clothed-avatar reconstruction, especially in challenging poses.
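To make the tri-plane idea concrete, here is a minimal sketch of how a 3D point can query three axis-aligned feature planes; the plane shapes, nearest-neighbour sampling, and additive fusion are simplifying assumptions for illustration, not the paper's actual architecture:

```python
import numpy as np

def query_triplane(planes, point):
    """Sample and fuse features for one 3D point from xy/yz/xz feature planes.

    planes: dict with keys 'xy', 'yz', 'xz', each an (R, R, C) array.
    point:  (3,) array with coordinates in [-1, 1].
    """
    R = planes['xy'].shape[0]
    # Map [-1, 1] coordinates to [0, R-1] grid indices (nearest neighbour;
    # real systems typically use bilinear interpolation).
    idx = np.clip(((point + 1.0) / 2.0 * (R - 1)).round().astype(int), 0, R - 1)
    x, y, z = idx
    feats = [planes['xy'][x, y], planes['yz'][y, z], planes['xz'][x, z]]
    return np.sum(feats, axis=0)  # simple additive fusion of the three planes

rng = np.random.default_rng(0)
planes = {k: rng.standard_normal((32, 32, 8)) for k in ('xy', 'yz', 'xz')}
feat = query_triplane(planes, np.array([0.1, -0.4, 0.7]))
print(feat.shape)  # (8,)
```

Each of the three planes contributes information about a different pair of axes, which is how the representation encodes 3D structure at memory cost quadratic (rather than cubic) in resolution.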
Thank you for your feedback. We will incorporate more detailed explanations in our future revisions.
- **Answer to question 1**: **The primary underlying factor for this disparity is the variations in data distribution between these datasets.**
1. In comparison to CAPE [16], the THuman2.0 [15] dataset features a higher density of mesh points, more complex attire and poses, as well as more occlusion. Consequently, during the stitching process, the proportions occupied by front and back depth maps are diminished, leading to a higher proportion of the regions being represented via SMPL-X [19] or IFnet+. This, in turn, contributes to an overall augmented error due to the complexity of the THuman2.0 dataset.
2. Meanwhile, the intricacy of THuman2.0 also results in offsets between the SMPL-X predictions and the front and back depth surface predicted by ECON [4]. This discrepancy makes it more susceptible to stitching errors, as illustrated in Figure 8.B of the ECON paper. This scenario further contributes to the deterioration of ECON's quantitative metrics on the THuman2.0 dataset.
- **Answer to question 3**: In our work, we employ the pretrained body-guided normal prediction network from [2] to generate the normal map (refer to Section A, lines 23-25 in the supplementary material).
**Explain:** **Since the emergence of [6], the integration of normal maps has gained traction within the realm of 3D human reconstruction.** Normal maps can be efficiently generated through available, model-free techniques [6] or guided by SMPL-based approaches, exemplified in [2, 4]. In our study, we chose to employ the pre-trained model from [2] for normal prediction due to its exceptional accuracy and stability. **Building upon the use of normal maps, our model introduces a global-correlated 3D-decoupling transformer to disentangle tri-plane features and a hybrid prior fusion strategy, leading to state-of-the-art results.**
---
Rebuttal Comment 1.1:
Comment: Thanks for the authors' response. It addresses some of my concerns, including the comparison with ECON and the benefits of the proposed modules. So I raise my score to borderline accept.
---
Reply to Comment 1.1.1:
Comment: Thank you for your thoughtful review and for reconsidering our rebuttal. We are grateful for your feedback and are pleased to hear that our responses addressed some of your concerns. Should you have any further questions or outstanding concerns, please let us know. We are committed to addressing any remaining issues and will make every effort to respond promptly over the next few days. | Rebuttal 1:
Rebuttal: - **Comparison of inference time:**
1. **Table 1(b) in the attached PDF displays the comparable time efficiency of our implicit function-based model with PIFu, PaMIR, and ICON.** In contrast, ECON, relying on explicit methods, demands more time due to the inefficiency of its d-BiNI and Poisson steps. The CAPE-NFP dataset with 256^3 resolution is used for testing, with ground-truth SMPL/SMPL-X provided.
2. The omission of inference time statistics in our initial submission was due to the following reasons:
- **Previous research in 3D human mesh reconstruction has primarily emphasized result quality rather than inference time.** Notably, many state-of-the-art methods (PIFu [1], PIFuHD [6], Pamir [3], ICON [2], ECON [4]) have not conducted time cost comparisons or prioritized efficiency improvements.
- **The superior time performance (compared to ECON) is likely due to the inherent efficiency of implicit function based methods.** It is not the primary contribution of our work, and due to page limitations, we did not include the inference time statistics.
- **Comparison of number of parameters:**
1. **In Table 1(a) of the attached PDF, it's evident that our model outperforms the UNet-based approach with fewer parameters, as indicated by better reconstruction results** (refer to Table 3(a) and Supplementary material lines 38-41). This highlights the advantages of our transformer-based design in extracting 3D features.
2. **Our decision to initially forgo a direct comparison of model parameter sizes was influenced by the prevailing trend in the field, where seminal works (e.g., ICON [2], Pamir [3], S3F [7]) seldom engage in such comparisons**.
3. However, we concur with your perspective on the importance of model parameter size comparisons. Such an approach would clarify the factors contributing to performance improvements, thereby fostering more robust development within the domain of 3D human reconstruction.
- **Answer for Unsatisfactory results in Figure 3 of SupMat:**
1. **The unsatisfactory results of GTA in Figure 3 of the SupMat are a direct consequence of our utilization of a lower Marching Cubes resolution for mesh generation from the implicit function network.** Specifically, in Figure 3 of SupMat, we employed a resolution of 128 for marching cubes. In contrast, for inference results such as Figure 4 of the main paper, we adopted a resolution of 512 for marching cubes. This parameter discrepancy is accountable for the incongruity between the results presented in these two figures. While all implicit function-based methods are constrained by the marching cubes resolution and can only reconstruct results with low spatial resolutions, the results of ECON[4], which don't rely on marching cubes, appear much better.
2. **In Figure 1 of our rebuttal response PDF, we have supplemented revised normal map results and showcased the impact of marching cubes resolution on GTA and ECON[4].** In subfigure a, we provide rendered normal maps under the setting of quantitative testing (resolution of 256), along with PIFuHD[6]'s test results. In subfigure b, we illustrate various resolutions' effects on GTA's reconstruction results, elucidating the causes of detail loss. In subfigure c, we demonstrate the influence of different resolutions on ECON's results, explaining why ECON's normal maps retain more detail.
**Explanation:**
1. **The reason we employed a resolution of 128 for marching cubes in Figure 3 of the SupMat is that the primary focus of this image is not on geometric details but rather on the main contribution of our paper, which is the enhancement in orthogonal plane geometry reconstruction achieved by GTA.** Within this image, we present a compilation of over 400 normal maps, each possessing a relatively compact scale. Therefore, we do not place strong emphasis on the geometric details of the normal maps. Moreover, taking into consideration that employing a resolution of 512 for marching cubes yields meshes that are approximately 20 times larger in terms of storage occupancy compared to the case of a resolution of 128, we opted for the adoption of a lower marching cubes resolution.
2. **The more detailed normal maps observed in ECON's results in Figure 3 can be attributed to its utilization of explicit reconstruction methodologies, which doesn't necessarily involve the application of marching cubes during the reconstruction process.** Among the comparative methods, namely PIFu [1], PIFuHD [6], PaMIR [3], and ICON [2], as well as our proposed GTA, the final mesh reconstructions are all generated using marching cubes. However, as detailed in Section 3.3 of ECON [4]'s paper, the method employed by ECON to reconstruct the mesh involves stitching the predicted front and back depth maps with SMPL-X [19] and optionally IFnet+ (referred to as $ECON_{EX}$ and $ECON_{IF}$, respectively). IFnet+ is an implicit function network that employs marching cubes in its predictions and is influenced by the marching cubes resolution. However, IFnet+ primarily affects regions where both front and back depth maps are unavailable or occluded, constituting a small portion of the reconstructed mesh during the stitching process. The majority of the mesh is determined by the depth maps with a resolution of 512x512. As a result, the reconstruction results of ECON in the figure still preserve a multitude of high-resolution details.
3. We sincerely appreciate the reviewer's keen observation and concerns. **We acknowledge that Figure 3 in the supplementary material may potentially lead to misunderstandings. To address this, in the upcoming revision, we will ensure uniformity by setting the marching cube resolution to 256 for all results**. While ECON [4]'s performance is relatively unaffected by marching cube resolution, our reconstructed results at this resolution are competitive with ECON's, particularly showcasing improved performance in side-view reconstructions.
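The storage argument above (512-resolution meshes roughly 20x larger than 128-resolution ones) is consistent with the surface-area scaling of grid-based extraction: the number of grid cells the iso-surface crosses, and hence the triangle count, grows roughly with the square of the resolution. A toy sketch on a unit-sphere SDF (not the paper's pipeline) illustrates this:

```python
import numpy as np

def surface_cell_count(res):
    """Approximate how many grid cells the unit-sphere iso-surface crosses at a
    given Marching Cubes grid resolution. Cells with |SDF| below one cell width
    stand in for cells that would emit triangles."""
    xs = np.linspace(-1.2, 1.2, res)
    X, Y, Z = np.meshgrid(xs, xs, xs, indexing='ij')
    sdf = np.sqrt(X**2 + Y**2 + Z**2) - 1.0  # signed distance to the unit sphere
    cell = xs[1] - xs[0]
    return int((np.abs(sdf) < cell).sum())

n_lo, n_hi = surface_cell_count(32), surface_cell_count(128)
# Surface cells (hence triangles and mesh storage) grow roughly with res^2,
# so quadrupling the resolution yields on the order of 16x the cells.
print(n_hi / n_lo)
```

Smaller resolutions are used here to keep the grid in memory; the same quadratic trend explains why going from 128 to 512 inflates mesh storage by roughly an order of magnitude or more.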
Pdf: /pdf/2e85c1112d39e55d953885decb4fb36900045b56.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: The proposed system extracts global features with a 3D transformer backbone for tri-plane feature-based 3D human reconstruction. Unlike existing methods that mainly rely on 2D CNN pixel-aligned features, this paper is the first one to use Vision Transformers for decoupling the 3D tri-plane features for refined reconstruction. Qualitative and quantitative experiments on the 3D human reconstruction dataset demonstrate the state-of-the-art performance of the proposed method.
Strengths: 1. The paper presentation is good, I had a pleasant time reading the paper.
1. The idea is well-motivated. The proposed method tries to solve the issues raised by mainly relying on pixel-aligned features and bad query methods of existing methods and introduces a tri-plane feature-based reconstruction method with a transformer-based network to mitigate these limitations.
1. Experiments demonstrate that the proposed method outperforms existing state-of-the-art methods both qualitatively and quantitatively.
Weaknesses: 1. This method uses a Vision Transformer-based feature extractor, which I expect has a higher model capacity than the CNNs used in previous methods. Is the performance improvement brought by the global-correlated nature of the transformer or due to the larger model capacity? Plus, a larger encoder may introduce higher inference time and also make the comparison with other methods unfair. It would be nice if the authors could add some inference time statistics of different methods to make the comparison more comprehensive.
1. How did you obtain the SMPL prior at the test time on Thuman 2.0? Is the SMPL obtained by an HMR method or using the ground-truth SMPL in Thuman 2.0? As the ground-truth SMPL is not available at test time, using the ground-truth SMPL would make the comparison unfair, especially for the methods in Figure 5. Please add some discussions on the SMPL condition. If the authors are using the ground-truth SMPL as the condition, it would be interesting to investigate the robustness of the proposed components (e.g., the querying and fusion strategy) when the SMPL condition is obtained with off-the-shelf HMR methods.
1. In Table 1, what is the difference between ECON and ECON*, as they are both trained on THuman2.0? The authors mention that * results are obtained from [2,4], but I cannot find PIFuHD* results trained on THuman2.0 in [2,4].
1. Normals metric (0.050) on Thuman2.0 in Table 1 seems to be calculated by averaging 6 view normals as it matches the results reported in Table 2. However, I think the original numbers in [2,4] are calculated by averaging normals of 4 views (without the above and below views). Does that indicate that there is some evaluation discrepancy between the evaluation metrics of this paper and previous ones?
1. As ICON/ECON released their code and models trained on Renderpeople, their performances on Thuman2.0 and CAPE-FP should be better provided.
1. In Figure 3 in the supplementary material, PIFuHD results should also be shown. And from Figure 3, I found the results of the proposed method are of lower spatial resolution and produce less detailed geometry than ECON. These low-resolution results do not match the results in other figures. The authors should give some explanations.
1. Minors
- The texts in Figure 3 are too small. Consider enlarging the figure for better readability.
- Figure 2 should compare different methods with the same input. Pairwise comparison gives the impression that the authors are cherry-picking the results.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: I mainly have some concerns regarding more rigorous and fair comparisons with other methods as written in the weaknesses section. Please make them more clear in the rebuttal.
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Limitations are properly discussed in the supplementary material.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely appreciate your positive remarks on our paper's presentation and the recognition of our method's effectiveness in addressing existing limitations. Thank you very much for your valuable comments!
We will address your inquiries and concerns point by point in the following responses.
- **Answer to weakness 1**: Please refer to global response "Comparison of number of parameters" and "Comparison of inference time".
- **Answer to weakness 2**:
1. All our experiments are carried out under equitable conditions.
2. For our geometry testing with open-source models, as delineated in Table 1, we utilized the ground-truth SMPL/SMPL-X, consistent with the methodologies employed by ICON [2] and ECON [4].
3. In the texture performance evaluations with non-public models, such as S3F [7] in Figure 5, we abstained from using ground-truth SMPL and instead leveraged PyMAF [25] to derive the SMPL prior.
**Explanation:**
1. **Many current leading-edge works, like ECON [4], utilize ground-truth SMPL for geometry comparisons, and we align with this approach in our geometry assessments (Table 1).** This choice helps counteract errors stemming from the HMR method, ensuring a more accurate evaluation of reconstruction quality across diverse model frameworks.
2. For Figure 5 experiments, with most models not being open-sourced, we used data from S3F's texture comparison (Table 3 in S3F [7]). S3F confirms not using ground-truth human priors in testing (Section 4.2 in [7]), and their use of GHUM instead of SMPL precludes using the same HMR method. **We selected PyMAF [25] for SMPL estimation from input images to ensure fairness in testing.**
3. We appreciate your astute observation. In subsequent revisions, we will elucidate our experimental settings more clearly and emphasize the fairness of our evaluations.
- **Answer to weakness 3**:
1. The "ECON" outcomes were acquired by evaluating on our hardware using the checkpoint provided by the ECON authors as they did not release training code. Conversely, the "ECON*" results are directly extracted from their original paper (see Table 1 in [4]).
2. We deeply apologize for the unintended mistake, attributed to LaTeX formatting. We highly appreciate your meticulousness and will rectify this in the revised manuscript. **For clarity, consult the results in [10] (Chamfer: 2.008, P2S: 1.965), where PIFuHD is assessed on THuman2.0. This oversight doesn't lessen the GTA model's merits**.
- **Answer to weakness 4**:
1. **We followed the established settings of previous works for widely used datasets.** The normal metrics in Table 1 for CAPE-NFP and CAPE-FP were computed by averaging normals from 4 views, aligning with methodologies in [2, 4].
2. **For our newly adopted THuman2.0 test set, we introduced a new standard.** Based on empirical observations, we found that incorporating normals from 6 views provides a more comprehensive model evaluation.
3. We appreciate your keen observation. Your feedback has highlighted a potential ambiguity in our manuscript regarding the normal testing methodology. To maintain uniformity and preclude any potential misinterpretations, we will standardize the normal results to 4 views in our revised manuscript.
- **Answer to weakness 5**: We employed the ECON [4] model trained on THuman2.0 (not Renderpeople) due to unavailable training code (Table 1, "ECON"). For ICON [2], we used their released model and evaluated it on the specified datasets.
**Explanation:** Our decision to initially omit the test data for the provided ICON model was driven by two primary considerations:
1. The released ICON model underperformed our THuman-trained version and their paper's reported results, so we omitted it.
2. We refrained from using the proprietary Renderpeople dataset due to its commercial and non-public nature, aligning with our commitment to promoting academic research. We appreciate your suggestion and will consider incorporating this result into the paper while still showcasing our model's superiority.
| | Chamfer | P2S | Normals |
| :-------- | :------ | :---- | :------ |
| CAPE-NFP | 1.207 | 1.256 | 0.071 |
| CAPE-FP | 1.165 | 1.216 | 0.073 |
| THuman2.0 | 1.395 | 1.527 | 0.115 |
- **Answer to weakness 6**: We add PIFuHD results in the attached PDF. Thank you for your keen observation. Because similar comments have been raised by other reviewers, we consolidate our response in the global author response "Answer for Unsatisfactory results in Figure 3 of SupMat".
- **Answer to weakness 7**: Thank you for your valuable feedback. We sincerely appreciate your insights, and we will certainly make the necessary revisions based on your suggestions.
---
Rebuttal Comment 1.1:
Comment: I sincerely thank the authors for providing a detailed response to my concerns. Most have been well addressed (inconsistent visual results, evaluation discrepancy, inference time, experiment settings, and minor issues). I believe wrapping up a final version with addressed issues will make the paper more rigorous and inspiring for further research. I noticed that other reviewers may have doubts about the novelty of the paper, and the authors are encouraged to further clarify the paper's merits. I look forward to the released model/code if it gets in.
---
Reply to Comment 1.1.1:
Comment: We sincerely appreciate your thorough review and constructive feedback on our manuscript. Your insights have significantly contributed to improving the quality of our work, and we are thankful for your time and effort.
In light of your valuable comments:
1. **We promise to incorporate all the addressed issues into the final version of the paper.** We believe that these adjustments will not only refine the content but also better highlight the significance of our research.
2. Pertaining to the concerns on novelty, while we believe our work introduces distinct and fresh perspectives to the field, we recognize the importance of making these contributions apparent to all readers. **In our rebuttal, we've provided detailed clarifications on the novelty of our approach (specifically in answer to weakness 3 of rebuttal with rJsL, and in answer to weakness 5 and question 2 of rebuttal with 6dkA). We pledge to further accentuate these unique merits in our finalized draft.**
**Lastly, we firmly commit to releasing our model/code as soon as our paper is accepted to foster further research and replication in the community.**
Thank you once again for your thoughtful review. We genuinely value the opportunity to enhance our work based on your recommendations. | null | null | null | null | null | null |
Fantastic Weights and How to Find Them: Where to Prune in Dynamic Sparse Training | Accept (poster) | Summary: The paper conducts an extensive empirical study including several popular pruning criteria and analyzes their impact on the DST framework on diverse models. They found that within a stable DST hyperparameter setup, the majority of the studied criteria
perform similarly, regardless of the model architecture and the selected growth criterion. In addition, in very sparse regimes, the simplest magnitude-based pruning methods surpass any more fancy choices.
Strengths: 1. The paper is well-written and the key research questions examined are well-supported by extensive experiments and appendix results.
2. Experiments related to how the frequency of topology updates influences the effectiveness of different pruning methods are interesting.
Weaknesses: The primary concern related to the submission is the lack of novelty. Although some observations will be very beneficial to the sparsity community, the supremacy of magnitude-based weight removal in the high sparsity range is not novel. I believe extending this paper beyond small-scale models to mid-to-large-scale models would make the paper more empirically strong and relevant in the current state of deep learning. In addition, some analysis of the growth trajectory, and of connections which were never removed or always remained pruned across different pruning strategies, would add more value and uplift the quality of the work. Overall, it is still a strong empirical paper.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: The paper's readability would benefit from including important hyperparameters, such as prune and growth ratios, in tabular format in the main draft. When they are interwoven with the experimental discussion, they become difficult to follow.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: See above.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Response Reviewer 3Q28**
We are grateful to the Reviewer for the thoughtful comments and feedback. We are pleased to hear the reviewer considered our research questions well-examined by the extensive experiments we performed. We are also thrilled to see that the Reviewer found our experiments on the topology updates interesting and recognizes our work as a strong empirical paper. Below we address the raised concerns:
*The primary concern related to the submission is the lack of novelty.* - We respectfully ask the reviewer to recognize the diverse possibilities for novelty within scientific research. While introducing new algorithms to address problems is undoubtedly one facet of it, comprehending existing solutions holds equal significance. Our thorough and comprehensive study encompasses novelty by asking questions that have not been previously studied, such as to what extent the performance of DST is guided by the pruning criterion, and how the pruning criterion interacts with the choices of DST hyperparameters.
We kindly encourage the Reviewer to consider that the significance of a weight (and thus the pruning criterion) can diverge from that in standard post-training pruning due to the parameter changes inherent in the DST framework (elaborated upon in lines 43-46 of the paper). Therefore it could be potentially dangerous to transfer the knowledge from standard (or static) pruning into DST without careful consideration. It is also worth noting that no definitive superiority of the magnitude criterion for DST has been previously firmly established, considering the emergence of various pruning criteria for DST in recent years. Our research offers valuable insights into the DST framework, suggesting that updates need not occur at such high frequencies and that more intricate pruning criteria do not necessarily outperform simpler magnitude-based ones. The originality and novelty of our work have been acknowledged by Reviewers xwEQ and mfmv. Therefore, we earnestly appeal to the Reviewer to assess our work within the broader context of innovation.
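To make the setting under discussion concrete, here is a minimal, illustrative sketch of a single DST mask-update step pairing the magnitude pruning criterion with gradient-based growth. This is our own sketch, not code from the paper; the function name and the flat list-based representation are assumptions made for exposition.

```python
def dst_update(weights, grads, mask, prune_fraction):
    """One DST topology update: prune the smallest-magnitude active
    weights, then grow the same number of currently inactive weights
    with the largest gradient magnitude (keeping density constant)."""
    active = [i for i, m in enumerate(mask) if m]
    inactive = [i for i, m in enumerate(mask) if not m]
    k = int(prune_fraction * len(active))
    # magnitude criterion: drop the k smallest-|w| active connections
    for i in sorted(active, key=lambda i: abs(weights[i]))[:k]:
        mask[i] = False
    # gradient growth: activate the k largest-|g| inactive connections
    for i in sorted(inactive, key=lambda i: -abs(grads[i]))[:k]:
        mask[i] = True
        weights[i] = 0.0  # regrown connections typically restart at zero
    return mask
```

Note that the inactive set is computed before pruning, so a connection removed in this step is not immediately regrown, mirroring the usual drop-and-grow schemes in the DST literature.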
*I believe extending this paper beyond small-scale models, to mid-to-large-scale models …* - We consider the VGG-16, ResNet-50, and EfficientNet models large enough to be regarded as at least “mid-scale” models (all have above 10M parameters, with ResNet-50 having 25M parameters). Both ResNets and EfficientNets are also commonly used in modern research, and we have chosen them in our evaluations because they have been included as the largest models in most of the works that used the studied pruning criteria (see Yuan et al. 2021, Evci et al. 2019). However, following the Reviewer’s request, we also include an evaluation on the ROBERTa-Large model (354M parameters) on the CommonsenseQA text dataset (please see the “Joint Response” for more details).
*In addition, some analysis of growth trajectory ...* - Thank you for this suggestion. In order to improve our work, we provide additional statistics on the never removed connections. We consider the masks at the end of training for the MLP, CNN, and ResNet-56 models on CIFAR10. Then we measure the number of weights that are always retained by every pruning criterion for the same seed and we divide it by the number of all remaining (unpruned) weights in the model. We do the same on the negation of the masks to get the number of the always removed weights and divide it by the number of all removed weights of the model. We present the results in the Rebuttal PDF in Figure R4. For the MLP model around 23% of all the retained weights are shared across all the models. For CNN, this number is higher and goes up to 37%, while for the ResNet-56 it is equal to 11%. In general, the smaller overlap of the pruning methods for the ResNet model is also something that we observe in the main paper in Figure 6b in the main text. Additionally, we would like to note that in Figure 9 in Appendix I we report how the Jaccard Index between the sets of weights selected for removal changes during the training - perhaps the Reviewer will also find this plot interesting.
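The overlap statistics described above can be computed from the final binary masks alone. The following is a minimal sketch of our own (not the authors' code), assuming a flat boolean-list representation of each mask and an equal number of retained weights per criterion (fixed density):

```python
def shared_fractions(masks):
    """masks: one boolean list per pruning criterion (True = retained).
    Returns (fraction of retained weights kept by *all* criteria,
             fraction of removed weights pruned by *all* criteria)."""
    n = len(masks[0])
    always_kept = sum(all(m[i] for m in masks) for i in range(n))
    always_removed = sum(not any(m[i] for m in masks) for i in range(n))
    kept = sum(masks[0])  # retained weights in one model (same for all)
    return always_kept / kept, always_removed / (n - kept)
```

For example, two criteria that each retain half the weights but agree on only one retained and one removed connection would yield `(0.5, 0.5)`.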
**Questions:**
The paper's readability will benefit by … - Thank you, we will add the below table to the Appendix:
| Experiment | Prune Fraction | Prune Fraction Decay Scheduler | Density | Update Period |
|------------|-----------------|--------------------------------|---------------------------------------------------------------|--------------------------------------------|
| Fig. 1 | 0.5 | cosine | 0.05, 0.06, 0.07, 0.08, 0.09, 0.1, 0.15, 0.2, 0.3, 0.4, 0.5 | 800 |
| Fig. 2 | 0.5 | cosine | 0.2 | 800 |
| Fig. 4 | 0.5 | cosine | 0.2, 0.1 | 25, 50, 100, 200, 400, 800, 1600, 3200, 6400 |
| Fig. 5 | 0.5 | cosine | 0.5 | 800 |
| Fig. 6a | 0.5 | cosine | 1.0 | 800 (measured only once) |
| Fig. 6b | 0.5 | cosine | 0.2 | 800 |
Once again, we are grateful for the feedback provided by the Reviewer, and we hope that our responses have addressed the raised concerns. We kindly ask the Reviewer to consider adjusting the score accordingly.
References:
Evci, Utku, et al. "Rigging the lottery: Making all tickets winners." ICML. PMLR, 2020.
Yuan, Geng, et al. "Mest: Accurate and fast memory-economic sparse training framework on the edge." NeurIPS 34 (2021).
---
Rebuttal Comment 1.1:
Title: Rebuttal Response
Comment: I have read the authors' rebuttal. Some of my concerns are addressed. I update my score to 6.
---
Reply to Comment 1.1.1:
Comment: We thank the Reviewer for reading our rebuttal, and we are grateful for updating the score. We are happy to provide any additional clarifications if needed. | Summary: This paper performs a systematic study of pruning strategies for dynamic sparse training (DST) methods, comparing their performance and structural decisions across backbone architectures, datasets, structural change frequencies, connection densities, and batch sizes.
Strengths: Originality: This work is the first systematic study on DST methods, combining various pruning and growing strategies that have thus far only been studied in isolation. It is also the first work to structurally compare the results of various methods. It does not introduce any new DST strategies, although it adapts a well-known pruning metric (SNIP) for the DST context and makes novel combinations of known DST strategies.
Quality: This paper backs up claims with adequate experimentation and analysis, although the scope is limited to mostly image classification tasks (plus one tabular dataset).
Clarity: The paper is easy to understand overall, with a clear writing style.
Significance: This paper provides a significant study on the interplay not only of growing and pruning strategies but also factors such as backbone architecture, dataset, structural change frequency, connection density, and batch size.
Weaknesses: I think random pruning (paired both with random or informed gradient-based growth) would be an interesting baseline in addition to dense and statically sparse architectures.
As noted by the authors, expanding to tasks beyond image classification (and one tabular dataset) would broaden the scope of this work.
You claim that growing is more thoroughly studied than pruning, but I’m not sure this is true. In Line 40, Dettmers and Zettlemoyer (2019) is cited as emphasizing the growth criterion but, to my understanding, it rather uses random growth with informed pruning. There seems to be a roughly equal number of cited DST works that use simple pruning with more informed growing and works that use simple growing with more informed pruning. This paper is still significant for its focus on the pruning method (more growing methods could be included, but this would quickly increase the experimentation cost); however, I think this language should be clarified.
The analysis on update period in Section 5.3 is interesting, but I think should be qualified by noting how the pruning factor schedule and batch size may be confounding factors.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors:
In addition to the batch size studies, I would be interested to see (in this or future work) how much the batch used for gradient evaluation affects the connections selected for pruning and/or growth in gradient-based pruning/growth methods. This could be studied by performing the training as normal until immediately before the 1st pruning step, then repeatedly resampling a batch and performing a backwards pass (without updating weights) in order to record which connections would be pruned and which would be grown.
Is Figure 2 aggregated over multiple trials per protocol? If so, please add error bars. If not, please run multiple trials.
Is there any way to add error intervals to Figure 6? I like the graphical presentation but would also appreciate error intervals to determine how noisy the metrics are and how significant the differences between strategies are.
Typos and tips (not considered negatively against paper):
* Line 13: “cautions”?
* Footnotes referring to appendices are not necessary: placing the references directly in the main text may save space.
* The paper layout and figure placement were a bit hard to follow. Figures that span the whole width seem to be placed rather randomly, whereas they are conventionally placed at the top of a page. This helps the reader read the main text more continuously and find the appropriate figure when referenced.
* Many captions ended with an analysis of the figure, like the last two sentences of the caption of Figure 1: I would prefer to see this in the main text rather than the caption.
* The Jaccard index equation could be moved to an appendix to save space.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Limitations and broader impacts are explicitly addressed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Response to Reviewer mfmv**
We thank the Reviewer for the review. We are very happy to see that the Reviewer appreciates the originality and significance of our work, recognizing it as the first systematic study on pruning criteria in DST methods. Furthermore, we are glad to hear our experimental evaluation and analysis have been acknowledged as adequate. We were mostly thrilled to receive such positive feedback. In the following, we address the remaining questions raised by the Reviewer:
*I think random pruning (paired both with random or informed gradient-based growth) would be an interesting baseline…* - We have observed that random pruning paired either with random or gradient growth performs much worse than all of the other criteria and baselines (we have added a visualization of that effect on ResNet56 on CIFAR-10 in Figure R3 in the rebuttal PDF). The performance drops especially dramatically for the very low densities, making it a less interesting baseline. While random may be a quite reasonable choice for the growth (due to influencing the exploration), in pruning it effectively leads to removing arbitrary connections in a trained network. For the above reasons, we initially decided to use only dense and static sparse baselines. We will add the random performance for all the Figures for the camera-ready version.
*As noted by the authors, expanding to tasks beyond image classification (and one tabular dataset) would broaden the scope of this work…* - We have enhanced our empirical evaluation with results on the ROBERTa Large model (~354M parameters) on the CommonsenseQA text dataset. We kindly ask the Reviewer to refer to the “Joint Response” for details.
*You claim that growing is more thoroughly studied than pruning, but I’m not sure this is true…* - Thank you for this comment. Indeed, that phrase could benefit from some clarification. What we intended to convey is that pruning criteria have never been rigorously compared in DST. At the same time, the interplay between some growth criteria is rather better understood (as demonstrated, for example, by Evci et al., 2019). Additionally, we would like to kindly note that we believe Dettmers and Zettlemoyer (2019) do, in fact, employ magnitude pruning alongside momentum-based growth (as evidenced by Algorithm 1 in their paper). We think rephrasing lines 38-41 of our paper to "Much of the research concerning design choices in DST focuses on the analysis of methods where variations occur in the growth criterion or the pruning criterion becomes intertwined with other design considerations [3, 10, 9, 1] (...)" will clarify our point. We hope that this adjustment effectively addresses the concerns raised by the Reviewer.
*The analysis on update period in Section 5.3 is interesting, but I think should be qualified by noting how the pruning factor schedule and batch size may be confounding factors.* - Thank you for raising this point. The update period, when considered together with the batch size, can be interpreted as the number of data samples the model sees (and trains on) before performing the mask update. We will adjust the text in the camera-ready version of the paper to include that information. Consequently, we would expect that once we increase the batch size, the best choice of the update period shifts to lower values, keeping the total number of seen examples similar. We ran a quick comparison on the MLP-CIFAR10 setup and observed this behavior (see Figures R1c and R1d in the rebuttal PDF). Regarding the pruning factor schedule: we again ran a quick comparison on the MLP-CIFAR10 setup, trying two schedulers apart from the default cosine one: a linear one and a constant one. The linear schedule multiplies the current decay rate by a constant factor (0.99) every 600 optimization steps. The constant schedule simply keeps the prune fraction fixed at 0.5 throughout the whole training. The results are in Figures R1a, R1b, and R1c in the rebuttal PDF. We observe that the choice of the pruning factor schedule does not significantly influence the best value of the update period. We will add those results to the Appendix.
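For reference, a cosine prune-fraction decay of the kind used as the default schedule above can be sketched as follows. The exact functional form is our assumption (modeled on the common half-cosine annealing used in the DST literature), not necessarily the one in the paper:

```python
import math

def cosine_prune_fraction(step, total_steps, initial_fraction=0.5):
    """Prune fraction annealed from `initial_fraction` down to 0
    over training along a half-cosine curve."""
    return initial_fraction / 2 * (1 + math.cos(math.pi * step / total_steps))
```

Under this form the schedule starts at 0.5, passes through 0.25 at the training midpoint, and reaches 0 at the end.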
**Questions:**
*I would be interested to see (in this or future work) how much the batch used for gradient evaluation affects the connections selected for pruning and/or growth in gradient-based pruning/growth methods...* - For the rebuttal, we ran a quick experiment in which we only increase the batch size for the DST update (from 128 to 1024). The results are in Figure R5 in the rebuttal PDF. We observe that, in general, this does not enhance the performance (and works significantly worse with gradient growth than with random growth for MLP-CIFAR10). We consider the study of the interplay between the DST update batch size and model performance an interesting direction for future work.
*Is Figure 2 aggregated over multiple trials per protocol?* - Yes, we have run 3 experiments in each setting, and we did include the error bars in the Figure - they are just very small (all the methods perform very stably). We apologize; we will enlarge them in the Figure to make them more visible.
*Is there any way to add error intervals to Figure 6?* - Thank you, this is a valid point. We will add the error intervals to the Appendix.
Typos and tips (not considered negatively against the paper): Thank you for catching the typos and providing the tips. We will enhance our paper accordingly.
We once again appreciate the Reviewer’s favorable rating and constructive feedback, helping us to further enhance the quality of the paper.
---
Rebuttal Comment 1.1:
Title: Response to rebuttal
Comment: Thank you for your response.
Regarding my question on how much the batch used for gradient evaluation affects the connections selected for pruning and/or growth in gradient-based pruning/growth methods, I meant the batch and not the batch size. Specifically, how sensitive is each method to the samples used for measurement? Your referenced experiment only partially studies this question, as we can assume a larger evaluation batch may give a more stable measurement, but only comparing two batch sizes isn't quite thorough enough to make any conclusions on this topic.
I maintain my initial rating.
---
Reply to Comment 1.1.1:
Comment: We appreciate the Reviewer's response and we apologize for misunderstanding the comment regarding the batch experiment. Exploring the influence of batch samples on the mask, beyond just batch size, is an intriguing question. We view it as an exciting avenue for future research. In response to the Reviewer's suggestion, we conducted a preliminary experiment using the ResNet56-CIFAR10 setup:
We perform the training normally until immediately before the 1st pruning step. Next, we select a batch and conduct the forward and backward passes without updating the weights. We then perform the mask update and save the masks right after the pruning and right after the growth. We undo the update and repeat this procedure every 10 batches. Then, for each pair of saved batches, we compute the mean Jaccard index of the pruned weight sets and average over all pairs (“pruned mean” in the table below). Additionally, we measure the maximum and minimum Jaccard index obtained by comparing any weights in a layer between any two saved batches (referred to as “pruned max” and “pruned min”). We also calculate the mean of pairwise Jaccard indices after the **complete** mask update (i.e., pruning and gradient growth, referred to as “mask mean”). We present the results in the table below.
| criterion | pruned mean | pruned max | pruned min | mask mean |
|:--|:--:|:--:|:--:|:--:|
| $\mathcal{C}_{magnitude}$ | 1.000$\pm$0.000 | 1.000 | 1.000 | 0.631$\pm$ 0.004 |
| $\mathcal{C}_{SET}$| 1.000$\pm$0.000 | 1.000 | 1.000 | 0.631$\pm$ 0.004 |
| $\mathcal{C}_{MEST}$| 0.989 $\pm$ 0.001 | 1.000 | 0.934 | 0.629$\pm$ 0.004 |
| $\mathcal{C}_{RSensitivity}$ | 0.592 $\pm$ 0.003 | 0.878 | 0.407| 0.562$\pm$0.003 |
| $\mathcal{C}_{SNIP}$| 0.586 $\pm$ 0.003 | 0.838 | 0.464 | 0.517$\pm$0.004 |
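The pairwise statistics in the table can be reproduced from the saved per-batch pruned index sets. The following is a minimal sketch of our own (function names assumed, not taken from the rebuttal code):

```python
from itertools import combinations

def jaccard(a, b):
    """Jaccard index |A ∩ B| / |A ∪ B| between two sets of pruned indices."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if (a | b) else 1.0

def mean_pairwise_jaccard(pruned_sets):
    """Average Jaccard index over all pairs of per-batch pruned sets."""
    pairs = list(combinations(pruned_sets, 2))
    return sum(jaccard(a, b) for a, b in pairs) / len(pairs)
```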
As expected, the magnitude and SET criteria consistently prune the same sets across batches due to their reliance on global statistics, unaffected by batch changes. The RSensitivity and SNIP criteria exhibit less overlap in pruned sets between batches (around 0.592 and 0.586 respectively). However, the size of this overlap is relatively stable, as reflected by the rather small standard deviation. We consider extending this preliminary study an interesting direction for future work. We thank the Reviewer for raising our attention to this matter and for the provided positive feedback. | Summary: This paper does a large scale study of dynamic sparse training, primarily on vision datasets. For a variety of models and hyperparameter settings, pruning using magnitude is found to be as or more effective when compared to pruning using more complex criteria proposed in the literature. The performance findings are reinforced by a study of the weights different pruning criteria select, and the impact of topology update frequency is studied.
Strengths: Originality:
1. This paper provides the most comprehensive study of DST pruning criteria that I am aware of, providing a valuable insight about the relevance of the magnitude criterion.
Quality:
2. The experiments are designed well and support the contention that magnitude is a strong criterion in a variety of DST scenarios.
Clarity:
3. The submission is well written.
Significance:
4. The performance comparisons of criteria that have been shown to be effective in the literature (e.g. MEST and magnitude) are valuable for the community.
Weaknesses: This paper does not have any major weaknesses. I believe its significance to the community (and correspondingly my score) would be best enhanced by ensuring the experiments cover a broad range of relevant architectures. I address this and some more minor issues, along with my suggestions, below.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: 1. In your related work, it is probably worth mentioning that Frankle et al. (2021) did an analysis of similar pruning criteria for static sparse training, also finding that magnitude was as good as more complex criteria.
2. Line 266: An alternative explanation might be that removing a smaller amount of weights leads to less disruption of the batch norm statistics with non-magnitude criteria. After pruning, do you update the batch normalization statistics (e.g., by running the model on the training data without updating its weights, see page 16 of [Zimmer et al., 2023](https://arxiv.org/pdf/2306.16788.pdf))? I ask because removing small magnitude weights may have little effect on these statistics, but removing larger weights (e.g., because their gradients were small) could have more substantial effects on the relevance of the old batch norm statistics to the new model. In which case, it would be difficult to conclude that the criterion is responsible for a certain performance level as that performance level could be attributable (at least in part) to incorrect batch statistics. It might be worth running an experiment to check this (e.g., on EfficientNet).
3. The networks considered cover a broad range but leave out architectures that have become more relevant in recent years. Also, the paper claims to cover large scale convolutional networks, but the largest network is a ResNet-50. Consider running on larger models and different architecture styles, e.g. ViTs.
4. Figures are too far from where they are discussed in the text. Consider adjusting their placement.
5. Lines 163-169 might benefit from being rephrased. As written, they do not provide a clear intuition for the expectation that a gradient based method would do better, which is what I think you are offering in these lines. In contrast, lines 184-185 (in conjunction with lines 177-178) give clear intuition for why I might expect a gradient based method to perform better.
6. Lines 188-189: wouldn't the mask solutions have to be "genuinely different" if the performances associated with the different criteria differed? I would say the prior experiments "do not necessarily indicate" instead of "do not indicate".
7. Line 270 and Figure 5 clash. You say that pruning adds regularization, but Figure 5 shows the dense model has the largest training loss - validation loss. Maybe you meant to define the loss gap as val - training instead of training - val.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good
Limitations: The identified limitations are appropriate.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Response to Reviewer xwEQ**
We appreciate the feedback from the Reviewer and the positive comments about both the originality and the value of our work. We are happy to see that the Reviewer finds our experiments well-designed and supportive of our main claims. We were also glad to learn that the Reviewer does not see any major weaknesses in our work. We address the remaining questions of the Reviewer below:
1. *In your related work, it is probably worth mentioning that Frankle et al. (2021)* - For clarity, did the Reviewer mean the Pruning neural networks at initialization: Why are we missing the mark?." arXiv:2009.08576, ICLR (2021) paper? If so, it is indeed an important and related study, that contrary to our setup focuses on the **static** sparse pruning. Thank you for raising our attention to it, we will update the Related Work section accordingly.
2. *Line 266: An alternative explanation might be that…* - Thank you for raising our attention to this work. We agree that the batch normalization statistics may influence the pruning during evaluation. Please note, however, that in DST we always prune during training, where the batch normalization running statistics are not used. During training, the normalization is computed on statistics taken directly from the mini-batch (see Algorithm 1 of Ioffe & Szegedy 2015, and the documentation of TensorFlow and PyTorch - links in the references below). The running normalization statistics only become relevant when we compute the final evaluation accuracy on the test set. But before that happens, the model still uses the iterations between the last mask update and the end of the training to update the running statistics. Therefore, at the time of evaluation, they should already be adjusted according to the new mask structure.
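The training-time behavior described above can be illustrated with a toy, single-feature batch-norm sketch (ours, not the framework implementations cited in the references): in training mode the output is normalized with the current mini-batch statistics, and the running statistics are only updated on the side; they are consumed exclusively in eval mode.

```python
class ToyBatchNorm1d:
    """Minimal batch-norm sketch for a single scalar feature."""

    def __init__(self, momentum=0.1, eps=1e-5):
        self.momentum, self.eps = momentum, eps
        self.running_mean, self.running_var = 0.0, 1.0

    def forward(self, batch, training):
        if training:
            # normalize with the current mini-batch statistics
            m = sum(batch) / len(batch)
            v = sum((x - m) ** 2 for x in batch) / len(batch)
            # running stats are updated here but NOT used for this output
            self.running_mean += self.momentum * (m - self.running_mean)
            self.running_var += self.momentum * (v - self.running_var)
        else:
            # eval mode is the only place the running stats are consumed
            m, v = self.running_mean, self.running_var
        return [(x - m) / (v + self.eps) ** 0.5 for x in batch]
```

So a mask update during training never touches the running statistics directly; the remaining training iterations let them drift toward the new sparse model's activations before test-time evaluation.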
3. *The networks considered cover a broad range but…* - We have added a task on a transformer architecture (ROBERTa Large, ~354M parameters) on the CommonsenseQA dataset. Please refer to the “Joint Response” for details. There we also explain why we initially focused on convolutional and MLP-styled architectures.
4. *Figures are too far from where they are discussed…* - Thank you, we will fix that.
5. *Lines 163-169 might benefit from being rephrased…* - Thank you for noticing that. We will rephrase it to: “Since the gradient may provide information on how the weight will change in the future, it indicates the trend the weight might follow in the next optimization updates. Therefore it may be more suitable in the DST approach, where connectivity is constantly evolving and hence the insights into future updates may be beneficial”. We hope this clarifies the statement.
6. *Lines 188-189: wouldn't the mask solutions have to be "genuinely different" if the performances associated with the different criteria differed? …* - Not necessarily. Performance (i.e., accuracy) is only a measure of the end result of the model. A large difference in performance does not have to imply (a priori) a large difference in the mask (e.g., hypothetically, the mask could differ in only a few connections, but those connections could be very significant to the performance). Without studying to what extent the masks of different pruning criteria differ (but considering only performance), we are not able to identify such a situation.
7. *Line 270 and Figure 5 clash …* - Thank you for catching that. We had a typo in the text. Yes, Figure 5 shows the validation loss - training loss.
We have, to the best of our ability, addressed all the concerns raised by the Reviewer and followed the suggestions to include more contemporary and larger architectures by adding the ROBERTa Large model. We hope that our answers provide the clarifications necessary to enhance our paper. We would greatly appreciate it if the Reviewer could re-evaluate our work, taking into account our response and the fact that the Reviewer finds no major weaknesses in our paper.
References:
Frankle, Jonathan, et al. "Pruning neural networks at initialization: Why are we missing the mark?." arXiv:2009.08576, ICLR (2021).
Ioffe, Sergey, and Christian Szegedy. "Batch normalization: Accelerating deep network training by reducing internal covariate shift." International conference on machine learning. pmlr, 2015.
tensorflow batch norm: https://www.tensorflow.org/api_docs/python/tf/keras/layers/BatchNormalization
pytorch batch norm: https://pytorch.org/docs/stable/generated/torch.nn.BatchNorm2d.html.
---
Rebuttal Comment 1.1:
Title: Acknowledgement of rebuttals and reviews
Comment: I have read the reviews and rebuttals, and I have updated my score. This submission's analysis of dynamic pruning criteria in various contexts is valuable, and my improved score reflects the addition of the ROBERTa analysis and the addressing of my more minor comments.
Comments on your responses to my questions:
1. Yes, that paper. Sounds good.
2. Your response suggests there are enough iterations between the last mask update and the end of training for this to not be a concern, thanks!
3. Sounds good.
4. Sounds good.
5. Sounds good.
6. I agree that a large difference in performance does not imply a "large" difference in the mask, but I think a large (or a small) difference in performance does imply a "genuine" difference in the mask (at least at some point during training, assuming every training facet but the masking is held constant). If you agree, then you could address this concern by saying "significantly different" instead of "genuinely different".
7. Sounds good.
---
Reply to Comment 1.1.1:
Comment: We express our gratitude to the Reviewer for thoroughly reviewing our response. Concerning point 6, we agree that substituting "genuinely" with "significantly" can enhance precision and we will adjust the text accordingly. We are happy that our answers addressed the Reviewer’s concerns, and we thank the Reviewer for raising the score. | Summary: This empirical paper looks into the setting of adapting neural network architecture during training (dynamic sparse training), repeatedly pruning and growing the network. Authors try to draw conclusions about the performance and topology of various pruning criteria. They show that in high density regimes, most of criteria give similar results, with methods that incorporate gradients performing best, but in low density regime, simple magnitude based pruning performs the best
Strengths: - well written, easy to follow
- the authors tackle a question that has not been investigated before
- the empirical study is rigorous and well executed (modulo my questions/remarks)
- assuming the authors open-source their benchmark, it can serve as a test bed for new pruning and/or growth criteria
Weaknesses: - Limited novelty (empirical study of existing methods). Not very deep insights
- mostly vision data and only 1 tabular dataset. No text data
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: This is comprehensive and well executed, and I am willing to raise my score if the authors have good intuitions for the questions below.
1) My conceptual question relates to line 218: you use the same hyperparameters for all pruning criteria. I wonder whether some pruning methods, for example cruder ones, could benefit from a smaller learning rate, etc. Basically, I am worried that reusing the same hyperparameters everywhere is what makes all your results so similar to each other, because in reality, when you train with DST and choose a pruning criterion, you tune the hyperparameters for that criterion.
Alternatively: is there any indication (for example, from earlier experiments) that the best hyperparameters do not depend on the pruning criterion?
2) Regarding the conclusion that gradient information is useful for pruning criteria in the high-density regime but not in the low-density regime (where magnitude-based pruning wins): couldn't this also be a function of the batch size? I would think that for very sparse networks, increasing the batch size would improve the quality of the gradient estimates, which might make pruning criteria that take the gradient into account perform better.
3) Are the time complexities of all the pruning criteria the same? (I don't think so; e.g., some require calculating gradients, while magnitude-based pruning uses just the weights.) It would be nice to include this somewhere: if all criteria perform similarly, the cheapest to compute is preferable.
Minor:
Line 21: "cautions" -> "careful" or "cautious"
Also, I find the title somewhat misleading: it suggested to me that you would be proposing a new criterion/method for "finding fantastic weights". Something that makes it clear that this is an empirical study of pruning criteria would probably be more appropriate (but I do appreciate snappy titles in general).
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 2 fair
Limitations: n/a
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Response to Reviewer aJDG**
We are grateful for the Reviewer’s feedback. We are happy to hear that the Reviewer finds that the questions we ask in our study have not been investigated before. We are also pleased to learn that our study has been recognized as rigorous and well-executed by the Reviewer. We will, of course, make our code open source (the code is also in the supplementary materials). Below, we provide answers to the raised concerns:
*Limited novelty (empirical study of existing methods). Not very deep insights* - We kindly request the Reviewer's thoughtful consideration of the multifaceted nature of novelty within scientific research. A researcher's role extends beyond the creation of new methods; it encompasses an understanding of existing ones as well. Exploring established solutions serves to organize current knowledge and offers valuable insights and directions for future endeavors. Our paper exemplifies this approach by focusing on the DST framework and conducting an analysis of how diverse pruning criteria from the literature influence model performance. As aptly noted by the Reviewer, this inquiry has not been investigated before.
Our work, as underscored by the Reviewer, employs a rigorous experimental setup. Notably, we demonstrate that the more intricate pruning criteria do not necessarily hold a distinct advantage over the fundamental magnitude-based approaches. Furthermore, our study indicates that in various scenarios, achieving commendable outcomes necessitates only a modest number of connectivity updates. We kindly implore the Reviewer not to confuse the simplicity of these insights with their lack of significance. Through our analysis, we cast a critical lens on the efficacy of current pruning solutions. Additionally, our results suggest that we can potentially accommodate more computationally intensive solutions for DST, given the infrequent need for updates. These contributions hold value for both present and future researchers in the field. It is noteworthy that the originality and significance of our findings have garnered praise from Reviewers xwEQ and mfmv. Considering the arguments outlined above, we sincerely invite the Reviewer to revisit our work with a broader perspective on novelty.
*Mostly vision data and only 1 tabular dataset. No text data* - We add a text dataset on the task of fine-tuning ROBERTa Large (please refer to “Joint Response” for the details).
**Questions:**
1. *My conceptual question is related to line 218 …* - The way the hyperparameter setup influences DST is indeed an interesting question. Our choice to use the same hyperparameter setup has two motivations. Firstly, using the same setup for all experiments allows us to assess the improvement that comes from changing only the pruning criterion in isolation from any other changes. Secondly, this choice was on par with the typical DST research in which the training hyperparameters (such as learning rate, etc.) are kept fixed, and the same as in the dense models (see for instance the MLP settings on page 4 of SET in Mocanu et al. 2018, Section 4.1 of RigL in Evci et al. 2019, Section 3.3 of SNFS in Dettmers & Zettlemoyer 2019 or page 8 of MEST in Yuan et al. 2021). We do however study a fair amount of different DST-specific hyperparameter choices. For instance, the entire section 5.3 is devoted to studying the impact of the update period hyperparameter. There we observe that the best setting of that parameter is typically common among all the different pruning criteria (Figure 4). We also study some design choices in DST such as local vs. global pruning, the impact of batch size (please see point below), and the MEST lambda hyperparameter in Appendix H, F, and A.3, respectively.
2. *RE: conclusion that gradient information is useful …* - We agree that this is an interesting question and we study it to some extent in Appendix F (we will highlight this matter more in the main text). When increasing the batch size (up to 1024 from the initial 128) we indeed observe an improvement in the performance of gradient-based criteria, especially when paired with random growth. However, the overall maximal performance of all methods decreases -- this may be the result of training with a too-large batch, as a certain level of stochasticity is considered beneficial in deep learning optimization. Note that we reuse the gradients computed in the backward pass (see also point “3” below) and hence the batch size of the training is the same as the one used to compute gradients for the pruning criterion. For the rebuttal, we have also run a quick experiment where we only increase the batch size for the DST update (from 128 to 1024). The results are in Figure R5 in the Rebuttal PDF. We observe that, in general, it does not enhance performance.
3. *Are the time complexities for all the pruning criteria the same too …* - We adapt the setting from MEST (Yuan et al. 2021), in which the current gradient is used to compute the score. In consequence, the computation of such a gradient happens naturally during the backward pass and does not induce any additional cost. We will make this clear in the main text.
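To make the cost comparison concrete, here is a small illustrative sketch (not the paper's actual code) of how per-weight saliency scores for a few pruning criteria can be computed from the weights and the gradients already produced by the ordinary backward pass. The function names, the `lam` coefficient, and the exact score formulas are our own simplifications of the criteria discussed above.

```python
import numpy as np

def pruning_scores(weight, grad, criterion="magnitude", lam=0.5):
    """Toy per-weight saliency scores for a few DST pruning criteria.

    `grad` is the gradient already computed during the ordinary backward
    pass, so gradient-based criteria incur no extra forward/backward cost
    in this setup. Score definitions are illustrative simplifications.
    """
    if criterion == "magnitude":
        return np.abs(weight)
    if criterion == "mest":          # |w| + lambda * |dL/dw|
        return np.abs(weight) + lam * np.abs(grad)
    if criterion == "sensitivity":   # |w * dL/dw| (first-order saliency)
        return np.abs(weight * grad)
    raise ValueError(f"unknown criterion: {criterion}")

def prune_lowest(weight, scores, frac=0.3):
    """Zero out the `frac` fraction of weights with the lowest score."""
    k = int(frac * weight.size)
    if k == 0:
        return weight.copy()
    thresh = np.sort(scores.ravel())[k - 1]
    out = weight.copy()
    out[scores <= thresh] = 0.0
    return out
```

In this sketch, switching the criterion only changes how `scores` are computed; the prune/grow machinery stays the same, which is the isolation the study above aims for.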
We hope that all our answers provide the necessary clarifications to the issues raised by the Reviewer. We have also followed the Reviewer’s suggestions to include text data in the comparison. Hoping that our answers have addressed the Reviewer's concerns, we kindly ask the Reviewer to consider increasing the rating.
References:
Mocanu et al. "Scalable training of artificial neural networks with adaptive sparse connectivity inspired by network science." Nature communications 9.1 (2018).
Evci et al. "Rigging the lottery: Making all tickets winners." ICML 2020.
Yuan et al. "Mest: Accurate and fast memory-economic sparse training framework on the edge." NeurIPS (2021).
Dettmers et al. "Sparse networks from scratch: Faster training without losing performance." arXiv:1907.04840 (2019).
---
Rebuttal Comment 1.1:
Title: Thank you
Comment: Thank you for your response.
RE: increasing the batch size experiments - I assume once you increased the batch size, you also had to retune the learning rate - or did you still keep it fixed? Fixed might be OK if you use some sort of adaptive optimizer (Adam? Adagrad?), but if this is SGD, you will have to re-tune the lr and possibly the number of epochs/steps to get valid conclusions.
In either case, I appreciate the responses and raise my score | Rebuttal 1:
Rebuttal: **JOINT RESPONSE**
We thank all the Reviewers for the time and effort taken to provide valuable insights and comments on our work. We are very glad that our research has been recognized as useful and beneficial to the community (Reviewers Q7vN, xwEQ, mfmv). Moreover, Reviewers xwEQ and mfmv appreciated the novelty and originality of our work, recognizing our comprehensive study of the DST pruning criteria. Our empirical evaluations have been praised as being thorough (Reviewers Q7vN, aJDG), well-designed, and supportive of our main claims (Reviewers mfmv, 3Q28, xwEQ). We are also pleased to learn that the reviewers found our paper to be well-written and easy to read (Reviewers aJDG, xwEQ, mfmv, 3Q28) and did not find any major weaknesses (Reviewer xwEQ). Overall, we are delighted to receive such an encouraging response from the Reviewers regarding the aforementioned aspects of our research.
We have noted a recurrent comment from the Reviewers, suggesting an extension of our results using transformer models and/or text data. In response to this feedback, we have conducted an additional evaluation of the pruning criteria on the fine-tuning task of ROBERTa Large (Liu et al., 2019) with approximately 354 million parameters, utilizing the CommonsenseQA dataset (Talmor et al., 2018) adapted from the Sparsity May Cry (SMC) Benchmark (Liu et al., 2023). The corresponding results are presented in Table R1 within the rebuttal PDF.
We have used the exact same setup as in the SMC-Benchmark, by first performing magnitude pruning for the sparse initialization and then using DST during the fine-tuning phase. We also use the same hyperparameters. Please note that this is a different configuration from the one used in our paper, in which we trained all the models from scratch (in spirit, the CommonsenseQA study is more similar to the EfficientNet fine-tuning experiment from Appendix G). Similar to the findings of Liu et al. (2023), we observed that in this scenario, the achievable sparsity without a significant performance decline is notably lower than in vision or tabular data scenarios. Concerning random growth, the MEST criterion appears to be the most suitable choice. For gradient growth with a density of 0.8, the RSensitivity and SET criteria initially demonstrate strong performance but are eventually surpassed by the magnitude and MEST criteria at a density of 0.7. Additionally, we noted a significantly higher variance in the outcomes across all criteria compared to vision tasks. Furthermore, we conducted a brief exploration of update period values for this problem (refer to Figure R2 in the Rebuttal PDF). Notably, we observed that overly frequent updates do not yield beneficial results. Concerning gradient growth, the most effective update period value is consistently around Δt = 500 for nearly all criteria. For random growth higher update period values generally lead to improved outcomes.
We will include these results in the paper. However, we would like to emphasize that our work already contains nine different model-dataset pairs, with model sizes varying from ~72K to ~25M parameters. We believe that our more than 7000 experiments are already a strong basis that allows us to make reasonable conclusions about the impact of the pruning criteria in DST. At the same time please note that the studied pruning criteria had been evaluated either on tabular or vision datasets in most of the works that have introduced them or used them (e.g., Mocanu et al. 2018, Yuan et al. 2021, Naji et al. 2021, Evci et al. 2020). Large language models typically have a different structure and are based on the attention mechanism, for which the applicability of sparse training is still an open area of research (Liu et al. 2023, Sun et al. 2023). This motivated our initial choice of fixing our focus on tabular and vision models.
For detailed responses to the individual concerns raised by each reviewer, please refer to our separate comments posted in response to their reviews. We have made a significant effort to address each issue and query raised by the reviewers and have included results for every suggestion made to improve our empirical evidence.
References:
Liu, Shiwei, et al. "Sparsity May Cry: Let Us Fail (Current) Sparse Neural Networks Together!." arXiv:2303.02141, ICLR (2023).
Sun, Mingjie, et al. "A Simple and Effective Pruning Approach for Large Language Models." arXiv preprint arXiv:2306.11695 (2023).
Liu, Yinhan, et al. "Roberta: A robustly optimized bert pretraining approach." arXiv preprint arXiv:1907.11692 (2019).
Talmor, Alon, et al. "Commonsenseqa: A question answering challenge targeting commonsense knowledge." arXiv preprint arXiv:1811.00937 (2018).
Mocanu et al. "Scalable training of artificial neural networks with adaptive sparse connectivity inspired by network science." Nature communications 9.1 (2018): 2383.
Evci, Utku, et al. "Rigging the lottery: Making all tickets winners." International Conference on Machine Learning. PMLR, 2020.
Yuan, Geng, et al. "Mest: Accurate and fast memory-economic sparse training framework on the edge." Advances in Neural Information Processing Systems 34 (2021): 20838-20850.
Naji, Seyed Majid, Azra Abtahi, and Farokh Marvasti. "Efficient Sparse Artificial Neural Networks." arXiv preprint arXiv:2103.07674 (2021).
Pdf: /pdf/8c68be3d0d78bb3deda01f4393a97cfdf98e8f51.pdf | NeurIPS_2023_submissions_huggingface | 2023 | Summary: This paper provides a comparison of several dynamic sparse training methods and concludes that they perform similarly except in the ultra-sparse regime, in which case magnitude-based pruning performs best.
Strengths: The comparison is thorough and useful for new researchers in this field.
Weaknesses: This paper only provides empirical experimental results for known dynamic sparse training methods. Lack of novelty is the main issue for this paper. I think it is more proper for it to be published as a report than in a conference proceeding.
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: N/A
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 2 fair
Presentation: 3 good
Contribution: 2 fair
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Response to Reviewer Q7vN**
We thank the Reviewer for the feedback. We are happy to hear that the Reviewer finds our experiments thorough and sees the usefulness of our research. If we understand correctly, the Reviewer’s only concern is in the novelty of the paper.
We kindly request the Reviewer to acknowledge that scientific research can encompass novelty in various forms. We strongly believe that thoroughly exploring existing solutions holds equal significance to the pursuit of new methods or architectures. Our paper presents a robust and comprehensive study that systematizes the current knowledge in the field while challenging common assumptions made in dynamic sparse training (DST), such as the perceived need for a high number of updates or the superiority of more elaborate pruning criteria. Our study establishes a strong baseline for future research endeavors. We wish to highlight that, prior to our work, there has been no comprehensive study of different pruning criteria in DST. Previous studies either examined them in isolation or intertwined them with other design choices like the growth criterion. We are the first to investigate the impact of pruning criteria on DST performance and the selection of hyperparameters. Additionally, as the Reviewer has also noted, our work provides valuable insights that can greatly benefit researchers in the field. The originality of our paper and the significance of its findings have been praised by Reviewers xwEQ and mfmv. Therefore, we sincerely ask the Reviewer to consider our work from a broader perspective of novelty and reconsider the possibility of adjusting the score accordingly. Your thoughtful consideration of these aspects will be greatly appreciated. | null | null | null | null | null | null |
Iteratively Learn Diverse Strategies with State Distance Information | Accept (poster) | Summary: Reinforcement Learning (RL) algorithms commonly learn a distinct policy that is responsible for a distinct behavior. Learning different, diverse behaviors generally is a difficult task in RL. This paper proposes a new algorithm State-based Intrinsic-reward Policy Optimization (SIPO) that can learn diverse, human interpretable behaviors.
The paper first discusses some of the current diversity measures and, based on this insight, proposes a new ITR-based method, even though PBT is the common choice when it comes to learning diverse behaviors. The method is efficient and can learn diverse behaviors, and its effectiveness is demonstrated in simulated environments.
Strengths: - well written. Motivation and contributions made clear
- the paper covers a good amount of related work, even though there are a few missing, in my opinion (see Weaknesses).
- Nice evaluations w.r.t. diversity
Weaknesses: - The big field of unsupervised reinforcement learning also covers a lot of similarity/dissimilarity measures for formulating an intrinsic reward. Even though some of the works are mentioned (e.g. DIAYN), there are several other works that should be mentioned as they are using different objectives and hence differently measure the similarity/dissimilarity:
- https://arxiv.org/pdf/1906.05274.pdf
- http://proceedings.mlr.press/v139/yarats21a/yarats21a.pdf
During a Google search, I discovered another work that also seems to have an interesting objective to learn different solutions:
- https://proceedings.mlr.press/v164/celik22a/celik22a.pdf
- weak motivation for the necessity of the state distance (see Questions)
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: - The motivation behind Figure 2 is unclear to me even when reading the text. I think this figure definitely needs more description and explanation. There are also no numbers indicating what the measured distances are.
- Different works have approximated the state entropy for discovering new skills. E.g. the APT [1]. It would be interesting to see how the used diversity measures compare against it. Can the authors assess, how different the solutions would be when using the approximated state entropy compared to the proposed metrics?
- The humanoid locomotion task in Section 6.1 is used to compare the diversity of learned policies. How do the methods compare w.r.t. performance?
[1]: https://openreview.net/forum?id=fIn4wLS2XzU
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 2 fair
Limitations: The paper explicitly addresses limitations but doesn't state anything about potential negative societal impact.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: + there are several other works that should be mentioned as they are using different objectives and hence differently measure the similarity/dissimilarity
We appreciate the reviewer's insightful suggestion and will make sure to incorporate these relevant works into our paper. The reviewer’s additional reference [1] operates in a parallel setting where a mixture of policies is learned within task-aware contextual MDPs, with each policy addressing a subspace of tasks. In contrast, our focus is on the discovery of distinct individual policies to tackle the same task.
[1] Celik, O., Zhou, D., Li, G., Becker, P., & Neumann, G. (2022, January). Specializing versatile skill libraries using local mixture of experts. In Conference on Robot Learning (pp. 1423-1433). PMLR.
+ The motivation behind Figure 2 is unclear to me even when reading the text. I think this figure definitely needs more description and explanation. There are also no numbers indicating what the measured distances are.
As an example, in the soccer attack, the goalkeeper could misleadingly amplify the action-based diversity score by outputting random actions (e.g. sliding in the backyard). This example underscores a notable issue. If action-based measures are leveraged for optimizing diversity, the resultant policies can produce visually similar behavior.
+ It would be interesting to see how the used diversity measures compare against APT.
We appreciate the suggestion. However, there is a fundamental distinction in formulation. APT optimizes state entropy within a *single policy*, whereas our method, SIPO, targets the joint entropy of *a population of policies*. It is okay for each single policy within the population to have low state entropy.
To employ APT's objective, training a population of agents concurrently is required. The algorithm should optimize the estimated entropy over states visited by all policies. Yet, this approach mandates large-scale k-NN computation (k=12) over substantial batches, leading to significant computational inefficiency. Despite our dedicated efforts, we didn’t finish a single training trial of APT within 48 hours (in contrast to other PBT baselines, e.g. DvD, which completes training in less than 8 hours). We primarily employ the k-NN state entropy metric to quantify diversity in the SMAC domain, as we show in the paper.
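For reference, the k-NN state-entropy metric mentioned above can be sketched in a few lines. This is an illustrative O(n²) pairwise version, not our actual evaluation code; the numerical floor constant is our own choice.

```python
import numpy as np

def knn_entropy(states, k=12):
    """Particle-based entropy estimate over a set of visited states.

    Proportional (up to additive/multiplicative constants) to the
    nearest-neighbour entropy estimator: the mean over points of the log
    distance to the k-th nearest neighbour. To measure population
    diversity, `states` pools trajectories from all policies.
    """
    states = np.asarray(states, dtype=float)
    # Full pairwise Euclidean distance matrix, shape (n, n).
    dists = np.linalg.norm(states[:, None, :] - states[None, :, :], axis=-1)
    # After sorting each row, index 0 is the point itself (distance 0),
    # so index k is the k-th nearest neighbour.
    knn = np.sort(dists, axis=1)[:, k]
    return float(np.mean(np.log(knn + 1e-8)))
```

A more spread-out set of states yields a higher estimate than a tightly clustered one, which is the behaviour the diversity metric relies on.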
+ The humanoid locomotion task in Section 6.1 is used to compare the diversity of learned policies. How do the methods compare w.r.t. performance?
We present the diversity score and corresponding average rewards achieved by all algorithms below. These numerical values are averaged across the complete population for a clear comparison:
| | SIPO-RBF | SIPO-WD | DIPG | RSPO | SMERL | DvD | PPO |
| --- | --- | --- | --- | --- | --- | --- | --- |
| humanoid reward | 3508 | 3763 | 5191 | 1455 | 4253 | 4498 | 5299 |
| humanoid diversity | 0.53 | 0.71 | 0.12 | 0.53 | 0.01 | 0.40 | - |
The tabulated data above highlights the varying trade-offs between task performance and diversity exhibited by different algorithms. It is noteworthy that SIPO, in particular, displays an adeptness at training a notably more diverse population while upholding a reasonably moderate level of task performance.
---
Rebuttal Comment 1.1:
Title: Answer to the authors
Comment: I am sorry for my late reply. I am thankful for the author's detailed explanations. I don't have further questions and would like to wait for the discussion period for further action. However, the author's response helps me to recommend a 'weak accept'. | Summary: This paper addresses the problem of learning diverse policies for a given RL task. First, it studies pros and cons of various formulations in two different dimensions: (i) How to measure diversity, for which it considers measures based on action or state distribution and state distances, (ii) how to compute the diverse policies, either with a joint method or an iterative one. Then, it combines the findings into a method, called SIPO, which implements iterative learning through gradient descent ascent with state distance incentives. Finally, it empirically evaluates SIPO on a variety of domains, including locomotion tasks, Star-Craft, and Google Research Football.
Strengths: - Relevance of the problem. Several recent works have addressed the problem of learning diverse policies in RL, in both supervised and unsupervised settings;
- Methodology. Interesting and reasonably efficient methodology that is based on a new state-metric perspective on the problem of learning diverse policies;
- Experimental results. Promising empirical results on challenging domains.
Weaknesses: - Motivation. The introduced objective lacks a strong theoretical ground that is mostly present in previous works with alternative diversity objectives;
- Related works. The comparison between SIPO and some relevant related works, especially Zahavy et al., 2022, is not really upfront;
- Experiments robustness and significance. The experiments consider arbitrary performance metric and average results over 5 or 3 seeds, which is hardly enough to get statistical significance.
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: I have one crucial concern about this paper, which is otherwise sound and interesting. The superiority of diversity measures based on state distances is only informally motivated, through examples rather than theoretical justification. Instead, the diversity measures based on action distribution or state occupancy have both been linked to important theoretical properties, such as reward robustness for the former (e.g., Husain et al., Regularized policies are reward robust, 2021) or bounded sub-optimality for the latter (e.g., Kakade and Langford, Approximately optimal approximate reinforcement learning, 2002). Diversity based on state distances currently lacks similar theoretical ground, which also leaves an existential question about the motivation of the paper: why do we want to learn diverse strategies in the first place? Is it to be robust to reward perturbations? Is it to learn reusable skills? Is it something else?
The concern reported above somewhat propagates to the experimental evaluation. If the ultimate purpose of the training is not clear, how can we evaluate one method against another? Indeed, the experiments are based on arbitrary diversity scores. SIPO may be better than baselines on those, but it is hard to reach some conclusion beyond qualitative justifications. Perhaps a good option would be to evaluate SIPO also on more consequential benchmarks, such as testing the reward robustness of the trained set of policies, or showing that the latter is a good starting point for fine-tuning to a different task.
For this main reason, I am currently providing a borderline evaluation, while I am open to change my score if the authors can provide more formal justification for their objective function in the rebuttal.
I provide below additional comments on the paper.
**Objective function**
One key aspect that Section 4.1 seems to overlook is that all of the provided motivations (especially in lines 158-169) make sense only when the environment's states lie on a metric space. In several domains, such as tabular MDPs with symbolic state representations or vision-based tasks, defining a proper metric over the states might be even more challenging than learning diverse policies alone. I think this limitation should be explicitly reported everywhere in the paper, including the introduction and abstract.
The motivation reported in lines 170-173 appears to be somewhat weak too, at least in domains where external rewards are present, and the concept of idle actions may be incorporated in the rewards.
Can the authors clarify the role of the cost function $g$ into the diversity score reported in Eq. 5?
**Experiments**
Why are the authors reporting experiments in multi-agent domains, even if they present a method for learning diverse policies in a single-agent setting? This choice looks somewhat odd, and it shall be motivated.
I am worried about the statistical significance of the reported performance results, since most of them are obtained as averages over 5 or 3 seeds. While I can understand that running experiments in such complex domains is costly, the standard to claim experimental robustness has increased lately. Moreover, the results report average and standard deviation. On the one hand, confidence intervals would be more meaningful than standard deviation; on the other hand, it is not enough to have the best average performance to say that one method outperforms another when the intervals are overlapping.
**Related Works**
This work looks in a way orthogonal to Zahavy et al. (2022), in which the authors present a method to learn diverse near-optimal policies by maximizing diversity with a hard reward constraint. While their concept of diversity is different, it would be interesting to discuss the pros and cons of the two alternative solutions in depth, and perhaps compare SIPO with Domino in the experimental campaign.
Another work that is somewhat relevant to the topic is (Mutti et al., Reward-free policy space compression for reinforcement learning). Their aim is to learn a diverse set of policies from which any reward function can be approximately optimized, and they also present a gradient descent ascent procedure for this purpose. The fact that their work does not account for rewards is an important difference, but the authors can perhaps consider discussing this work in the paper as well.
One key benefit that I see in the SIPO solution w.r.t. prior works, is that SIPO shall enjoy favorable computational complexity, whereas both Zahavy et al. and Mutti et al. need to solve a non-convex non-concave optimization problem.
**Minor**
- The preliminaries are framing the problem as POMDP. I do not understand why partial observability is introduced, and then never used in the paper at all.
- Figure 4, IL is instead ITR?
- Theorem 4 says that ITR can achieve the same rewards as PBT, but only with an inferior diversity. This trade-off between computation and diversity could be better highlighted.
- One of the evaluation metrics is the entropy estimator via k-NN, for which (Liu & Abbeel, 2021) is mentioned. Note that such entropy estimators have been presented before (e.g., Singh et al., Nearest neighbor estimates of entropy, 2001). Moreover, if the goal is to maximize the state entropy, why not use APT instead of SIPO?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 3 good
Contribution: 2 fair
Limitations: The limitations are reported in the final sentences of the paper. Perhaps, some limiting aspects could be discussed in additional length.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: + motivation
Our primary aim is to achieve diversity itself. Agents with similar reward functions can manifest significantly diverse behaviors (e.g., high-reward unexpected behavior[1]). This property is different from standard DL where different local optima (almost) suggest the same results (i.e., proxy to the global optimum[2]).
This prompts the importance of studying *the learned behavior* and finding *all unique solutions for the same task* in RL, beyond only the rewards. We believe this is a **fundamental research problem in RL** of which, however, very few researchers have been fully aware. A diverse collection of policies can further enhance human-AI collaboration or help design robust robots, depending on the application (lines 27-30).
Despite our best efforts to engage in theoretical analysis, we note that a consensus on the best algorithmic formulation of "distinct solutions" in RL remains elusive.
[1] https://openai.com/blog/faulty-reward-functions/
[2] Ma, T. (2020). Why Do Local Methods Solve Nonconvex Problems?.
+ clarification of the evaluation metric
We endeavored to select the best available task-specific metric to authentically reflect behavioral diversity. For humanoid, we adopt a metric embraced by the Quality-Diversity community. However, for SMAC and GRF, existing metrics can fail (e.g. Table 8 in [1] and our Appendix B.2.3). Therefore, it is necessary to engineer task-specific metrics relying on both visualization-based insights and metrics in previous works.
We enthusiastically encourage the reviewer to review all replay files utilized for tallying policy counts on our project website: https://mega.nz/folder/xP0D2CRa#YRL-PVjjsyZhGZ2QUZqH2g.
[1] https://arxiv.org/abs/2204.02246
+ tabular MDPs/vision-based tasks
We explicitly assume the access to object-centric information and features in our paper. We emphasize that this assumption typically holds in MARL benchmarks and can be further addressed by incorporating feature learning algorithms, e.g. [1] and [2]. We discussed this in Appendix F.2 and are committed to reinforcing this.
We note that SIPO-WD has already addressed this assumption through the incorporation of a learnable Wasserstein discriminator. The discriminator can effectively process images or one-hot state embeddings as inputs, as detailed in Appendix B.3.
[1] https://arxiv.org/abs/1810.04586
[2] https://arxiv.org/abs/2102.11271
+ The motivation in lines 170-173 appears weak
As an example, in the soccer attack, the goalkeeper could misleadingly amplify the action-based diversity score by outputting random actions (e.g. sliding in the backyard). This example underscores a notable issue. If action-based measures are leveraged for optimizing diversity, the resultant policies can produce visually similar behavior.
While it can be possible to exclude idle actions by modifying task rewards, it requires a large number of hacks and engineering efforts. The issue of idle actions exists even in popular MARL benchmarks like GRF. We propose a systematic approach to bypass idle actions by directly considering state distances.
+ the role of the cost function g?
The cost function g is a notation providing a generalized and unified definition. It also contributes to training stability by scaling the raw distance. g in Eq.5 can be realized by either RBF kernel or Wasserstein distance.
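As an illustrative sketch (not our exact implementation), one way to realise g as an RBF kernel on raw state distances is shown below; the aggregation by maximum and the `sigma` bandwidth are illustrative choices.

```python
import numpy as np

def rbf_diversity_reward(state, reference_states, sigma=1.0):
    """Intrinsic reward pushing the current policy's states away from a
    previously discovered policy's states.

    g is realised as an RBF kernel on the raw state distance,
    g(d) = exp(-d^2 / (2 sigma^2)), which also scales the raw distance
    into [0, 1] for training stability. The reward is high when `state`
    is far (in Euclidean distance) from every reference state and near
    zero when it overlaps them.
    """
    state = np.asarray(state, dtype=float)
    reference_states = np.asarray(reference_states, dtype=float)
    d = np.linalg.norm(reference_states - state, axis=-1)
    similarity = np.exp(-(d ** 2) / (2 * sigma ** 2))
    return float(1.0 - similarity.max())  # penalise the closest overlap
```

Because the kernel output is bounded, the intrinsic reward cannot blow up even when raw state distances are large, which is the stability effect of g mentioned above.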
+ multi-agent experiments and POMDP
We wish to validate that SIPO is general enough to be applied to many difficult and complicated scenarios, such as testbeds like GRF and SMAC (SMAC has partial observability). These environments encompass a notably more diverse range of potential winning strategies, and existing methods can fail. Therefore, they offer an apt platform for assessing SIPO's capacity.
(Dec-)POMDP is a standard formulation in many MARL works. We follow this common definition since it provides the fewest algorithmic assumptions.
+ Comparison with Domino
While we have discussed this in Sec. 2, we are delighted to present an in-depth discussion below.
**Formulation** Domino pursues hard constraints on rewards while optimizing diversity, potentially hindering policies with disparate reward scales. Therefore, Domino tends to discover similar policies that all attain the same reward. SIPO, on the other hand, allows for much more diverse locally optimal policies.
**Diversity Measure** Domino employs the distance between successor features, while SIPO directly utilizes the distance between raw states. Domino is complementary to our work because SIPO can also incorporate successor features as the state representation.
We have meticulously re-implemented Domino within our codebase according to Domino’s appendix. We execute the algorithm in the Humanoid locomotion task, employing the robot state (excluding torques) for successor feature computation. Despite our earnest efforts to optimize Domino's performance, our findings reveal its comparable performance to SMERL, illustrated by a minimal diversity score of 0.01.
+ APT metric and its usage in experiments
We acknowledge the missing reference and will duly append it to our paper.
Regarding the utilization of APT, we appreciate the suggestion. However, there is a fundamental distinction in formulation. APT optimizes state entropy within a *single policy*, whereas our method, SIPO, targets the joint entropy of *a population of policies*. Each individual policy within the population may still have low state entropy.
To employ APT's objective, training a population of agents concurrently is required. The algorithm should optimize the estimated entropy over states visited by all policies. Yet, this approach mandates large-scale k-NN computation (k=12) over substantial batches, leading to significant computational inefficiency. Despite our dedicated efforts, we didn’t finish a single training trial of APT within 48 hours (in contrast to other PBT baselines, e.g. DvD, which completes training in less than 8 hours).
---
Rebuttal Comment 1.1:
Title: After response
Comment: I want to thank the authors for their detailed replies.
Unfortunately, I am still doubtful about both the strength of the motivation and the choice of evaluation metrics. I concede that those are matters of opinion rather than formal/technical weaknesses. I see little chance of resolving them through back-and-forth discussion. Instead, I will hear other opinions in the private discussion before making a final evaluation.
As a minor note, I think that proposing SIPO as a way to "find all unique solutions for the same task in RL" is somewhat overstated. This looks closer to the objective of Domino, in which one maximizes the diversity of a set of nearly optimal policies. Instead, SIPO does not control the sub-optimality, as it has a hard constraint on diversity. This means that some of the policies provided by SIPO are not solutions to the task, even in an approximate sense.
---
Reply to Comment 1.1.1:
Title: Additional Author Response
Comment: We appreciate the reviewer's active engagement. We believe that your insights and feedback are immensely valuable to us. Thanks a lot.
We also have a brief reply to the minor comment regarding the difference between Domino and SIPO. We focus more on the interesting emergent behaviors than on the reward score. We want to emphasize that **in the multi-agent setting**, policies with lower scores **can still be** solutions to the problem. It can be proved that each local optimum in cooperative multi-agent games is a global Nash equilibrium [2], which is a meaningful solution to the problem. Domino has its own flaw in that it can easily ignore a significant collection of solutions (e.g., in multi-agent trust dilemmas [1]).
[1] Peysakhovich, A., & Lerer, A. (2017). Prosocial learning agents solve generalized stag hunts better than selfish ones. AAMAS 2018.
[2] Emmons, S., Oesterheld, C., Critch, A., Conitzer, V., & Russell, S. (2022, June). For learning in symmetric teams, local optima are global nash equilibria. In International Conference on Machine Learning (pp. 5924-5943). PMLR. | Summary: The paper discusses the challenge of optimizing rewards while discovering diverse strategies in complex reinforcement learning problems. This paper examines two design choices for tackling this challenge: diversity measure and computation framework. By incorporating state-space distance information into the diversity measure, the behavioral differences between policies are accurately captured. Besides, two common computation frameworks: population-based training (PBT) and iterative learning (ITR) are compared, and it shows that ITR can achieve comparable diversity scores with higher computation efficiency. Based on above analysis, a novel diversity-driven RL algorithm named State-based Intrinsic-reward Policy Optimization (SIPO) is proposed. The authors evaluate SIPO across three environments and demonstrate that it consistently produces diverse policies that cannot be discovered by existing baselines.
Strengths: This paper introduces a state-based population diversity measure and, in practice, converts it into a shaping reward for computational convenience. By comparing PBT and ITR, it concludes that ITR is able to achieve higher performance. Additionally, experiment environments spanning both single-agent and multi-agent reinforcement learning all show the effectiveness of the proposed method. Overall, this paper is well-written and presents some contributions to the field of reinforcement learning. The authors provide clear explanations of their methodology and experimental results, making the paper easy to follow.
Weaknesses: In this paper, the proposed state-based diversity measure is one of the key innovations. Combined with the ITR framework, it achieves satisfactory performance. However, some weaknesses remain to be addressed.
(1) The comprehensiveness of literature reviews remains to be improved.
(2) The motivating example in Sec. 4.1 shows the limitation of previous action-based diversity measures. However, the reason why a state-distance-based measure can overcome the problem is not adequately explained.
(3) The experimental evaluation seems insufficient. Some SOTA diversity-enhancement methods in RL and MARL are not analyzed or summarized. Besides, the diversity assessment criteria seem unfair.
(4) The paper claims that the heatmaps of agent positions in SMAC show obvious strategic differences. However, all four policies seem to explore similar areas of the map.
(5) While the main body of the paper is well-written, there is space for improvement. I defer some of my issues in the appendix to "Questions".
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: Q1: Would you improve the literature reviews from the aspect of adding some well-known or SOTA related works (e.g. [1] [2] [3] [4] [5])?
Q2: Why can state-distance based measurement overcome the shortcomings of action-based measurements? It is suggested to make more discussion theoretically.
Q3: Can SIPO be extended to algorithms in two-player zero-sum game scenario, like self-play, PSRO?
Q4: What would happen if the state-based and action-based diversity measures were combined? It is suggested to add a study of this to the experiment section.
Q5: Since the population size is fixed in PBT or ITR, how to determine it remains an open question. Would you add a theoretical or empirical analysis of it?
Q6: The diversity comparison criterion seems unfair to the baselines in Humanoid Locomotion and SMAC, since all baselines’ optimization objectives are action-based. An intuitive evaluation criterion is the quantifiable behavioral difference between agents. Could you provide more analysis of this?
Q7: The heatmaps in Fig. 7 seem to show that the positions explored by the four agents are not significantly different. Would you add additional explanation of this?
Q8: Minor: the abbreviation in Fig. 4 (i.e., “IL”) has not been explained.
[1] Liu, Z., Yu, C., Yang, Y., Wu, Z., & Li, Y. (2022). A Unified Diversity Measure for Multiagent Reinforcement Learning. Advances in Neural Information Processing Systems, 35, 10339-10352.
[2] Balduzzi, D., Garnelo, M., Bachrach, Y., Czarnecki, W., Perolat, J., Jaderberg, M., & Graepel, T. (2019, May). Open-ended learning in symmetric zero-sum games. In International Conference on Machine Learning (pp. 434-443). PMLR.
[3] Hu, S., Xie, C., Liang, X., & Chang, X. (2022, June). Policy diagnosis via measuring role diversity in cooperative multi-agent rl. In International Conference on Machine Learning (pp. 9041-9071). PMLR.
[4] Masood, M. A., & Doshi-Velez, F. (2019). Diversity-inducing policy gradient: Using maximum mean discrepancy to find a set of diverse policies. arXiv preprint arXiv:1906.00088.
[5] Li, C., Wang, T., Wu, C., Zhao, Q., Yang, J., & Zhang, C. (2021). Celebrating diversity in shared multi-agent reinforcement learning. Advances in Neural Information Processing Systems, 34, 3991-4002.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: The authors have concluded the limitation of the methodology from the aspect of state representation and acceleration problem.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: + Some SOTA diversity enhancement methods in RL and MARL are not analyzed or summarized.
We sincerely thank the reviewer for providing additional relevant literature, but we believe that our paper does well in contrasting against previous methods for developing diverse policies in RL.
[1] and [2] study diversity in competitive multi-agent games for minimizing exploitability. This objective does not apply to our settings, i.e., single-agent MDPs and cooperative multi-agent games.
[3] and [5] explore cooperative multi-agent games, gauging diversity **among agents within each game**. Their final objectives are optimizing rewards. In contrast, our approach focuses on diversity across **a collection of distinct joint policies**. The final reward is the secondary objective in our setting. [5] has been discussed in line 89 of our paper. We commit to incorporating [3] in our forthcoming revision.
We acknowledge the significance of [4] (i.e., DIPG) in relation to our paper. This work is extensively discussed in our related work section and comprehensively juxtaposed in our experimental analyses (cited as [39] in our paper).
+ Why can state-distance based measurement overcome the shortcomings? Theoretical discussion?
Action-based measures may fail when visually similar states are reached through very different action sequences. We provide particular **counterexamples** of action-based measures in Sec 4.1, suggesting fundamental flaws of existing methods.
State-distance-based measures circumvent this issue by explicitly comparing states, which directly addresses the counterexample. Based on this finding, we further validate our proposed methods empirically on many challenging domains in Sec. 6. How to best quantify diversity remains an open question, and the community has not yet converged on a widely accepted theoretical framework. For this reason, we choose to analyze the algorithmic issues using specific examples and an analysis of algorithm convergence.
+ Can SIPO be extended to algorithms in self-play, PSRO?
SIPO's design inherently enables the generation of diverse policies for conquering static opponents, as illustrated by SMAC and GRF experiments. We acknowledge that the integration of SIPO (low-level policy solver for best responses) with PSRO (high-level policy mixture) holds great potential. We regard this as a promising avenue for future exploration.
+ What if state-based and action-based diversity measurements are combined?
The reviewer’s suggestion is greatly appreciated. In response, we have carried out additional experiments within the GRF domain to explore this approach. We introduce action information by directly concatenating the global state, used for diversity calculation, with the one-hot encoded actions of all agents. The following table presents the outcomes, indicating the number of policies obtained:
| | 3v1 | CA | corner |
| --- | --- | ---| ---|
| SIPO-RBF | 3.0 (0.8) | 3.3 (0.5) | 2.7 (0.5) |
| SIPO-RBF + action | 3.0 (0.0) | 2.3 (0.5) | 1.0 (0.0) |
For scenarios with a limited number of agents, the action-augmented variant demonstrates comparable performance. However, when the agent count increases (as evident in the 11-agent cases of CA and corner), the incorporation of actions can introduce misleading diversity, detracting from the authenticity of the outcomes. We will append these results in our final version.
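A minimal sketch of the augmentation described above; the function name and shapes are illustrative assumptions:

```python
import numpy as np

def augment_state_with_actions(global_state, actions, n_actions):
    """Concatenate the global state with one-hot encodings of all agents' actions,
    forming the input used for diversity calculation."""
    onehot = np.eye(n_actions)[actions].reshape(-1)  # shape: (n_agents * n_actions,)
    return np.concatenate([global_state, onehot])

# e.g., 3 agents, 4 discrete actions, and a 6-dimensional global state
aug = augment_state_with_actions(np.zeros(6), np.array([1, 0, 3]), n_actions=4)
```

With many agents, the one-hot block dominates the vector's dimensionality, which is consistent with the observation above that actions can introduce misleading diversity in the 11-agent scenarios.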
+ the determination of the population size
A larger population usually leads to better performance on the desired application [1]. SIPO/ITR enables open-ended training for diversity discovery. With unlimited time and computation, training can proceed until no additional distinct strategies can be discovered. In our paper, we run with the maximal population size given the available resources.
[1] Tang, Z. et al. (2021). Discovering diverse multi-agent strategic behavior via reward randomization. ICLR.
+ Comparison criteria of diversity seems unfair to baselines in Humanoid Locomotion and SMAC
The adopted criteria (i.e., torque distance and k-NN state entropy) are commonly used evaluation metrics [2,4]. Prior works (i.e., our baselines) develop diversity measures for their own purposes, such as encouraging exploration or deriving robust policies, and may fail to learn meaningfully diverse policies. Our measures are designed to directly tackle the diversity problem based on the analysis of counterexamples and empirical evidence. Adopting a state-based criterion does not unfairly favor SIPO, since no method explicitly optimizes this evaluation criterion.
Prior works [3] have also attempted to use an action-based metric, i.e., the DvD score. However, [3] reported that this criterion easily saturates at its maximum value of 1.0 and fails to distinguish among different algorithms.
[2] Shuang Wu, Jian Yao, Haobo Fu, Ye Tian, Chao Qian, Yaodong Yang, QIANG FU, and Yang Wei. Quality-similar diversity via population based reinforcement learning. In The Eleventh International Conference on Learning Representations, 2023.
[3] Zhou, Z., Fu, W., Zhang, B., & Wu, Y. (2022). Continuously discovering novel strategies via reward-switching policy optimization. arXiv preprint arXiv:2204.02246.
[4] Singh et al., Nearest neighbor estimates of entropy, 2001
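For reference, the nearest-neighbor entropy estimator cited in [4] can be sketched in its simplest 1-NN form over a pooled batch of visited states; the implementation details below are our assumption, not taken from the paper:

```python
import math
import numpy as np

def knn_entropy(states):
    """1-NN (Kozachenko-Leonenko / Singh et al.) differential entropy estimate, in nats.

    states: array of shape (n_samples, dim).
    """
    n, d = states.shape
    # Pairwise Euclidean distances; exclude self-distances on the diagonal.
    dists = np.linalg.norm(states[:, None, :] - states[None, :, :], axis=-1)
    np.fill_diagonal(dists, np.inf)
    r = dists.min(axis=1)  # distance to each point's nearest neighbor
    # log volume of the d-dimensional unit ball: pi^(d/2) / Gamma(d/2 + 1)
    log_vd = (d / 2) * math.log(math.pi) - math.lgamma(d / 2 + 1)
    euler_gamma = 0.5772156649015329
    return d * np.mean(np.log(r)) + log_vd + math.log(n - 1) + euler_gamma
```

A more spread-out state distribution yields larger nearest-neighbor distances and thus a higher entropy estimate, which is what the evaluation metric rewards.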
+ As can be seen, the heatmaps in Fig7 seems to show that the positions explored by the four agents are not significantly different. Would you add additional explanations on it?
To facilitate a comprehensive comparison, we have included policies generated by DIPG for the reviewer's consideration in the attached pdf. Compared with DIPG, we contend that the heatmap in our paper exhibits greater behavioral diversity. Moreover, we strongly encourage the reviewer to peruse the GIF demonstrations on our [project website](https://sites.google.com/view/diversity-sipo) for an enhanced qualitative assessment.
---
Rebuttal Comment 1.1:
Comment: First of all, thank you to the author for response to the questions which I raised. Some concerns have been answered adequately. According to the author's response, I agree to increase the score of this article by 2 point to "weak accept". | Summary: This work proposes a solution to the problem of finding diverse policies for complex (multi-agent) reinforcement learning (RL) environments. The paper is presented as a joint study on diversity metrics and learning frameworks. For the former, the authors show the limitations of common diversity metrics like action-distribution diversity and state-occupancy based metrics and argue for metrics that incorporate state-distances for training mutually-distinct policies instead. For the latter, population-based (PBT) and iterative training (ITR) approaches are considered, where PBT is presented as a more formally suitable framework, that in reality is limited by its pairwise diversity constraints. ITR as a reasonable relaxation of PBT, on the other hand, is found to be effective in combination with the author’s proposed (’meaningful’) state-distance diversity. The combination if ITR and the proposed metric is incorporated into the State-Based Intrinsic-Reward Policy Optimization (SIPO) algorithm and evaluated on the single-agent human locomotion domain, as well as the multi-agent SMAC and GRF environments. SIPO is shown to outperform related baselines and is presented to be more capable of producing visually distinct / humanly intuitive policies compared to its counterparts.
Strengths: - The paper is well researched in related work. While the evaluated frameworks PBT and ITR or the concept of state-distance in itself are not novel (but well cited and explained), the combination of ITR with the proposed state-dissimilarity diversity -- realized either as RBF-Kernel or with Wasserstein-Distance — are novel as far as I can tell.
- Both the study on the limitations and the evaluation are presented in depth and read quite intuitively. I am not too familiar with formal proofs of convergence criteria but the technical aspects in the paper itself do appear to be sound. The evaluation is reasonably complex, covering both a complex single-agent domain and two multi-agent domains against reasonable baseline models.
- The paper is very nicely written and well organized. The logical structure of motivation, related work, background, analytic study of the frameworks in combination with the metric and the presentation of the SIPO algorithm and its evaluation is easy to follow and intuitively explained. Small examples to illustrate the argumentation help lighten an otherwise densely formulated reasoning. I also find the visualizations to be nicely done.
- The authors’ SIPO approach seems to produce some very diverse strategies, which also appear intuitively interpretable. While diversity in itself is very important for exploration and an important problem to solve for RL in general, I find the convergence towards interpretable strategies to be very intriguing. The paper is not only nicely understandable and informative on the matter of diversity-related approaches; the evaluation is rather strong and the supplementary material has enough content for another paper by itself. I have no real complaints about this work, very nicely done.
Weaknesses: - Besides providing a reasonably complex evaluation, there could have been more than 5 seeds used for the evaluation.
- I would also like to see at least some classic total-reward metrics / comparisons for these domains, as the metrics chosen here to highlight the performance of the SIPO algorithm (pairwise distances, est. state entropy, agent position distributions, nr. of distinct strategies) seem a bit cherry-picked to argue in favor of diversity only. While there are some relevant plots in the Appendix (and some of the toy examples reason with rewards), I would encourage showcasing at least one such classic performance comparison against the baselines in the main paper itself.
Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: - In 4.2 it is mentioned that PBT only converges when the exploration is faint. Is this claim shown somewhere or is this based on empirical evidence?
- Could you provide some motivation on why you are casting $D_s(\pi_i, \pi_j^*)$ as intrinsic reward, and why RBF Kernel and the Wasserstein Distance were chosen for the diversity? Why does L_2 WSD provide ‘stronger discriminative power’(l263)?
- Increasingly more MARL approaches start to require a DEC-POMDP for independent / localized actions without a fully observable state. How would you judge SIPO’s transferability to such a formalization?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 3 good
Limitations: There are two larger limitations mentioned, for once the assumption of continuity for convergence and the access to an object-centric state-representation. Both are openly disclosed and discussed in the paper. ‘The acceleration of ITR remains an open challenge’ is also a valid outlook, that should be addressed in the future.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: + I would also like to see at least some classic total-reward metrics / comparisons
Appendix B.5 elaborates on the detailed returns accomplished by SIPO. We present the diversity score and average rewards achieved by all algorithms below. These numerical values are averaged across the entire population for a clear comparison:
| humanoid |SIPO-RBF | SIPO-WD | DIPG | RSPO | SMERL | DvD | PPO |
| ---- | ---- |---- |---- |---- |---- |---- |---- |
| reward | 3508 | 3763 | 5191 | 1455 |4253 | 4498 | 5299 |
| diversity | 0.53 | 0.71 | 0.12 | 0.53 | 0.01 | 0.40 | - |
| SMAC 2m1z | | | | | | | |
| win rate % | 100 | 100 | 100 | 100 | 100 |100 | 100 |
| diversity (1e-3) | 38 | 36 | 32 | 32 | 28 | 30 | - |
| SMAC 2c64zg | | | | | | | |
| win rate % | 99 | 93 | 99 | 85 | 100 |100 | 100 |
| diversity (1e-3) | 72 | 56 | 70 | 56 | 42 | 57 | - |
| GRF 3v1 (first 4) | | | | | | | |
| win rate % | 93 | 82 | 93 | 94 | 91 | 83 | 92 |
| diversity | 3.0 | 3.0 | 2.7 | 2.0 | 1.3 | 3.0 | 2.7 |
| GRF CA | | | | | | | |
| win rate % | 70 | 41 | 46 | 76 | 45 | - | 50 |
| diversity | 3.3 | 3.0 | 2.3 | 2.0 | 1.3 | - | 1.7 |
| GRF Corner | | | | | | | |
| win rate % | 72 | 56 | 75 | 23 | 67 | - | 71 |
| diversity | 2.7 | 3.0 | 1.7 | 1.6 | 1.0 | - | 2.0 |
The tabulated data above highlights the varying trade-offs between task performance and diversity exhibited by different algorithms. It is noteworthy that SIPO, in particular, displays an adeptness at training a notably more diverse population while upholding a reasonably moderate level of task performance.
+ In 4.2 it is mentioned that PBT only converges when the exploration is faint. Is this claim shown somewhere or is this based on empirical evidence?
This conclusion can be drawn from empirical evidence presented in Table 2. While PBT may occasionally succeed in learning diverse policies, this outcome is contingent upon initializations that prompt the agent to explore all landmarks. Nevertheless, such instances of success are outweighed by cases of failure, leading to notable variance in PBT outcomes.
+ why you are casting the diversity measure as intrinsic reward
The objective in Equation (7) is not differentiable w.r.t. \pi. This is because it depends on the states traversed by \pi, rather than \pi’s output. Consequently, conventional gradient-based methods are unsuitable for its optimization. Nevertheless, we are able to represent it as the cumulative sum of intrinsic rewards, specifically the intrinsic return. This allows us to leverage policy gradient techniques for optimization.
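A hedged sketch of this idea, with the archive format and the RBF-based shaping as illustrative assumptions: the per-step intrinsic reward depends only on the visited state's distance to states from previously discovered policies, so a policy-gradient method can optimize the resulting intrinsic return.

```python
import numpy as np

def intrinsic_reward(state, archive_states, sigma=1.0):
    """Reward a state for being far (under an RBF kernel) from the states
    visited by previously discovered policies (the 'archive')."""
    sq_dists = np.sum((archive_states - state) ** 2, axis=1)
    similarity = np.exp(-sq_dists.min() / (2 * sigma ** 2))
    return 1.0 - similarity  # high when far from every archived state

# The total per-step reward fed to a PPO-style trainer would then be e.g.
#   r_t = task_reward_t + lambda_div * intrinsic_reward(s_t, archive)
# so the non-differentiable state-based objective becomes an intrinsic return.
```

Because the intrinsic term is just another per-step reward, standard policy-gradient machinery applies unchanged.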
+ why RBF Kernel and the Wasserstein Distance were chosen for the diversity? Why does L_2 WSD provide ‘stronger discriminative power’(l263)?
We first chose the RBF kernel because it is the simplest and most widely adopted kernel function in the ML community. Regarding the choice of the Wasserstein distance, also referred to as the earth mover's distance, it holds a distinct advantage due to its interpretation within optimal transport theory [1,2]. Unlike distances that rely solely on specific summary statistics such as means, the Wasserstein distance can effectively quantify shifts in state distributions and remains robust in the presence of outliers [2].
[1] Arjovsky, M., Chintala, S., & Bottou, L. (2017, July). Wasserstein generative adversarial networks. In International conference on machine learning (pp. 214-223). PMLR.
[2] Villani, C. (2009). Optimal transport: old and new (Vol. 338, p. 23). Berlin: springer.
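For intuition, in one dimension and for equal-size empirical samples, the L1 Wasserstein distance reduces to the mean absolute difference of sorted samples; a minimal sketch (our simplification; SIPO-WD itself uses a learned Wasserstein discriminator over state distributions, per Appendix B.3):

```python
import numpy as np

def wasserstein_1d(a, b):
    """W1 distance between two equal-size 1-D empirical distributions.

    For sorted samples, the optimal transport plan matches the i-th smallest
    point of `a` to the i-th smallest point of `b`.
    """
    return np.mean(np.abs(np.sort(a) - np.sort(b)))
```

Note how a pure shift of the distribution is measured exactly, unlike mean-only statistics which would also report the shift but ignore any change in shape.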
+ Increasingly more MARL approaches start to require a DEC-POMDP for independent / localized actions without a fully observable state. How would you judge the SIPO’s transferability to such a formalization?
Experimental outcomes in SMAC and GRF confirm the favorable performance of SIPO in Dec-POMDPs. Like most popular MARL algorithms, we assume accessible global states only during training, i.e., centralized training with decentralized execution (CTDE).
In the fully decentralized training setting, discovering diverse policies requires the fundamental improvement of existing MARL algorithms. While this is not the main focus of our paper, we are making efforts to address this problem in future works.
---
Rebuttal Comment 1.1:
Title: Thank you for the rebuttal
Comment: Thank you to the authors for this honest reply. While Apx. B5 does in fact show some reward details, the averaged scores in this provided table much more clearly show that there is in fact quite a tradeoff between diversity and performance. I hope this will be communicated more upfront in the CR. However, I do personally agree with the authors on the importance of diversity and interpretability of RL policies, even though performance is not quite on par. I am still quite in favor of publishing this work due to its excellent readability and the progress on human-interpretable results. I would welcome it if the code were open-sourced as a helpful tool for the community.
Rebuttal: We express our sincere gratitude to the reviewer for their meticulous examination and thoughtful feedback on our manuscript. We respond to each reviewer's questions in the corresponding threads. Please feel free to drop a message if you have additional concerns.
We acknowledge a typo in the figures of our paper. "IL" should be "ITR". We promise to fix this in our next revision.
The attached pdf contains the heatmap of DIPG in SMAC, in response to reviewer hytR.
Pdf: /pdf/23399df6f8bea9d08296a3f326abb894eac7885f.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Test-Time Amendment with a Coarse Classifier for Fine-Grained Classification | Accept (poster) | Summary: The paper considers the problem of fine-grained classification and proposes to use label hierarchy information at test time to improve the performance of the fine-grained classifier. The overarching goal is to improve the top-1 accuracy while at the same time reducing the severity of the mistakes (e.g., misclassifying species from the same kind or family is more acceptable than making a totally unrelated prediction).
The paper presents a post-hoc correction technique called Hierarchical Ensembles (HiE). The key idea is to train 2 classifiers - 1 fine-grained and 1 coarse-grained. At inference time, the predictions of the coarse-grained classifier are used to re-weight the probabilities of the fine-grained one. The authors perform experiments on 2 standard benchmark datasets: iNaturalist-19 and tieredImageNet-H and claim to achieve new state-of-the-art results on both of them. They also show the promise of their method in a semi supervised learning setup where the coarse grained labels are known for all samples in the dataset but the fine-grained (being more expensive to collect) only for few samples per class.
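One plausible reading of that re-weighting step, sketched below; the exact Eqs. (2) and (3) are not reproduced in this review, so the combination rule and function name here are illustrative assumptions:

```python
import numpy as np

def hierarchical_ensemble(fine_probs, coarse_probs, parent_of):
    """Re-weight each fine-grained class probability by the coarse classifier's
    probability of its parent class, then renormalize."""
    weighted = fine_probs * coarse_probs[parent_of]
    return weighted / weighted.sum()

# 4 fine classes under 2 coarse parents (fine 0,1 -> parent 0; fine 2,3 -> parent 1)
fine = np.array([0.3, 0.2, 0.4, 0.1])
coarse = np.array([0.9, 0.1])
parent_of = np.array([0, 0, 1, 1])
amended = hierarchical_ensemble(fine, coarse, parent_of)
```

In this toy example the fine classifier alone would predict class 2, but a confident coarse classifier shifts mass to the subtree of parent 0, so the amended prediction becomes class 0, illustrating how coarse predictions can reduce mistake severity.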
Strengths: * The paper is written well and is easy to read.
* The paper extends and builds up on prior works and seems to exceed their results.
* The Semi Supervised Learning setup is interesting.
* The conducted experiments and ablations are sensible.
Weaknesses: My main concern is with the methodology (Section 3). I understand the final decision rule (Eqs. (2) and (3)), but its derivation and motivation are confusing to me.
L118: "Assuming conditional independence between the estimated logits...": $i ⫫ i_{parent} \mid x$ means $P(i, i_{parent} \mid x) = P(i \mid x)P(i_{parent} \mid x)$. Thus, to the best of my knowledge, the denominator in Eq. (1) should not be there. Moreover, I believe that the conditional independence assumption is unrealistic for the considered setup. It does not feel right that the fine-grained prediction/probability is independent of its parent. This affects the derivations in the rest of the section.
Minor:
* The following related, in my opinion, citation is missing:\
Ridnik et al., ImageNet-21K Pretraining for the Masses, NeurIPS 2021 (Datasets and Benchmarks)\
They train ImageNet-21K classifiers using the WordNet hierarchy.
* For deep hierarchies, it may still be costly to collect labels from the penultimate hierarchy level. It would be useful if the authors include the set of labels from the last two levels for each of the datasets and refer to them from the main paper.
Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: Q1: Could you please clarify how you formally derive the intuition for the decision rule defined in Eqs. (2) and (3)?
Q2: Would it help if you use the logits from $\hat{y}_{L}$ and
$\hat{y}_{L-1}$ directly instead of the softmaxed probabilities in $Q$ and $R$?
Q3: To complete the discussion from L143-147, you can also consider the case when the fine-grained prediction is correct but the coarse-level prediction is wrong.
Q4: I am curious how often there is a mismatch between the fine-grained and the coarse classifier. And what is the distribution / the average of the depth of their LCA nodes.
Q5: L220-221: "... training a separate network at the coarse label allows explicit disentanglement of features at the coarse and fine-grained levels ...". I don't believe that the current set of experiments is sufficient to make such a strong claim.
Q6: Does the $\pm$ in the tables refer to standard deviation? How many runs were executed?
Q7: Sec. 4.4: Following the comments in the Weaknesses section above, it might be useful to formally clarify what is meant by "cascade the predictions top-down" (L279-280). Also, do you need to incorporate consecutive hierarchical levels (e.g., would it be possible and sufficient to take levels 5 and 7, omitting level 6)?
Q8: Sec 5: What do preliminary experiments suggest if you unify the architecture of the fine-grained and the coarse-grained classifier (and, e.g., only fine-tune separate classification heads for the two classifiers)?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 3 good
Contribution: 2 fair
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your constructive and thoughtful comments. They were indeed helpful to improve the paper. We take this opportunity to address your concerns:
* **Q1: Clarify how you formally derive the intuition for the decision rule defined in Eqs. (2) and (3)?** We agree with your comments and recognize that omitting the intermediate reasoning steps resulted in a lack of clarity. We carefully redo the derivation of the proposed decision rule in the global response above (at the top). Furthermore, we would like to point out that if we simply consider Eqn. 1 as a modified scoring function, the proof and the strong empirical results still hold.
* **Q2: Would it help if you use the logits?** Using logits instead of softmax probabilities for HiE improves over the baseline; however, the improvement is smaller than with softmax. We provide the comparison below.
| Method | Top-1 | Avg. Mistakes | Hier Dist@1 | Hier Dist@5 | Hier Dist@20 |
|:-------:|:-----:|:---------:|:----------:|:----------:|:-----------:|
| Logits | 64.46 | 2.16 | 0.77 | 1.35 | 2.43 |
| Softmax | 64.61 | 2.15 | 0.76 | 1.33 | 2.19 |
* **Q3: L143-147 Consider the case when the fine-grained prediction is correct but the coarse-level prediction is wrong** We provide the mistake mismatch distribution on the iNaturalist dataset below. There are a few examples where fine-grained predictions are correct, while coarse-grained predictions are wrong (3.46%). In such cases, HiE may hamper the performance. However, that is only a small fraction compared to the more interesting scenario of coarse-correct and fine-incorrect (25.19%). We will add this discussion to the paper.
| Before HiE | Fine Correct | Fine Incorrect |
|------------------|--------------|----------------|
| Coarse Correct | 60.24 | 25.19 |
| Coarse Incorrect | 3.46 | 11.11 |
* **Q4: Mismatch between the fine-grained and the coarse classifier, and the distribution / average depth of their LCA nodes** We provide the distribution in the response to Q3. We provide the average LCA depth between the predicted fine-grained and coarse-grained labels below (for each of the four cases).
| Before HiE | Fine Correct | Fine Incorrect |
|------------------|--------------|----------------|
| Coarse Correct | 1.00 | 1.65 |
| Coarse Incorrect | 4.07 | 3.07 |
* **Q5: L220-221: "training a separate network at the coarse label allows explicit disentanglement of features at the coarse and fine-grained levels"** We thank the reviewer for pointing this out. Indeed, we claim that the two models learn complementary features which can be combined using our HiE method. We will update the paper to avoid using the term explicit disentanglement.
* **Q6: Does the ± in the tables refer to standard deviation? How many runs were executed?** Yes, it indicates the standard deviation over 5 runs.
* **Q7: Sec. 4.4: Cascade and skipping hierarchy levels** In the cascade process, we apply HiE on level 2 using level 1 for coarse predictions. The updated level-2 predictions are then used as coarse predictions to apply HiE on level 3. The process repeats until the leaf level. \
Our approach does not necessarily require consecutive hierarchical levels. HiE results on level 7 of the iNaturalist dataset, directly using level 5 (omitting level 6) for coarse prediction, are given below. It leads to noticeable gains over the baseline; however, the performance remains marginally lower than when using level 6 for coarse classification.
| Method | Top-1 | Avg. Mistakes | Hier Dist@1 | Hier Dist@5 | Hier Dist@20 |
|------------|------:|--------------:|------------:|------------:|-------------:|
| Baseline | 63.70 | 2.38 | 0.86 | 1.96 | 3.24 |
| HiE (Level 6) | 64.61 | 2.15 | 0.76 | 1.33 | 2.19 |
| HiE (Level 5) | 64.49 | 2.16 | 0.77 | 1.36 | 2.09 |
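For concreteness, the cascading procedure can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the hypothetical `hie` function implements the re-weighting of Eqn. 1 (fine-grained probabilities weighted by their parent's coarse probability, then renormalized), and each `parent` list maps a fine class index to its coarse parent index.

```python
import numpy as np

def hie(fine_probs, coarse_probs, parent):
    # Eqn. 1: weight each fine-grained probability by the coarse
    # probability of its parent class, then renormalize.
    weighted = fine_probs * coarse_probs[np.asarray(parent)]
    return weighted / weighted.sum()

def cascade_hie(level_probs, parents):
    # Top-down cascade: the HiE-updated predictions at level k serve
    # as the coarse predictions when applying HiE at level k+1.
    coarse = level_probs[0]
    for fine, parent in zip(level_probs[1:], parents):
        coarse = hie(fine, coarse, parent)
    return coarse
```

Skipping a level (e.g. using level 5 rather than level 6 as the coarse classifier for level 7) only changes which `parent` mapping is supplied.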
* **Q8: Sec 5 Preliminary experiments on a unified architecture** Compared to the baseline, a unified architecture with a common backbone for feature extraction and separate classification heads for the coarse and fine-grained levels results in decreased performance for fine-grained classification (63.32% vs. 63.70%) and improved performance on coarse classification (86.15% vs. 85.30%). Applying HiE between the two classification heads improves the hierarchical metrics but results in lower top-1 accuracy.
| Method | Top-1 | Avg. Mistakes | Hier Dist@1 | Hier Dist@5 | Hier Dist@20 |
|------------|------:|--------------:|------------:|------------:|-------------:|
| Baseline | 63.70 | 2.38 | 0.86 | 1.96 | 3.24 |
| Unified Arch. | 63.32 | 2.20 | 0.81 | 1.50 | 2.62 |
| HiE Unified| 63.10 | 2.17 | 0.80 | 1.33 | 2.26 |
| HiE Separate | 64.61 | 2.15 | 0.76 | 1.33 | 2.19 |
**Missing Citation:** Thank you for pointing out this reference. We will add this to the paper. We would also like to point out that the method in [Ridnik et al., ImageNet-21K Pretraining for the Masses] is similar to [25], which we compare against in our experiments.
---
Rebuttal Comment 1.1:
Comment: Thank you, increased my score to 4 in light of the further details provided by the authors and will continue the discussion in the main response. | Summary: This method introduces a novel approach to achieve state-of-the-art fine-grained image classification. It addresses the issue of mistake severity by developing the Hierarchical Ensemble (HiE) loss, which effectively penalizes incorrect predictions of both coarse and fine-grained labels. The HiE loss combines the probabilities of predicting the coarse label and the fine-grained label, providing a joint probability measure.
The authors of this method provide a proof that accurately predicting the coarse label probability enhances overall accuracy. To validate their approach, they conducted extensive experimentation on two hierarchical datasets, comparing their method against various existing related works and three additional baselines. The evaluation metrics used include top-1 error, mistake severity, and hierarchical distance. The results clearly demonstrate that their method outperforms comparable approaches across all these measures.
Moreover, the authors conducted further experiments in a semi-supervised setting to assess the performance of their method when only 10% of labels are available. Remarkably, their approach exhibits significant performance improvements even under such limited label availability, highlighting its effectiveness and robustness.
In addition to the aforementioned experiments, the authors explored the impact of using different pretrained backbone models in their method. They conducted experiments with various hierarchical depths to evaluate the method's adaptability and generalization across different classification hierarchies. The results of these additional experiments further support the superiority and versatility of their proposed method.
Overall, this research presents a compelling method for fine-grained image classification, showcasing its state-of-the-art performance, reduced mistake severity, and its ability to improve accuracy even with limited labeled data. The comprehensive experimentation and analysis conducted by the authors demonstrate the effectiveness and versatility of their approach in various scenarios, making it a valuable contribution to the field of image classification.
Strengths: The text is very well written and easy to understand. It’s very clear what the problem is and how their method can be used to increase performance.
The authors conducted a comprehensive range of experiments to thoroughly evaluate the performance of their model.
The method demonstrates robustness by effectively utilizing any pretrained backbone model.
Weaknesses: Since I’m not as familiar with this method, it would have been helpful to have a clearer description of these metrics within the paper, as I had to refer to the related works to gain a better understanding.
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors:
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: I believe the authors appropriately discussed the limitations of their method.
The collection of coarse-grained labels in addition to fine-grained labels seems very costly, but this is outside the scope of this method.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We would like to thank the reviewer for their positive feedback.
**Evaluation Metrics:** Thank you for the suggestion. We will update the paper to discuss the evaluation metrics in more detail and make it self-contained. | Summary: This paper proposes a novel approach called Hierarchical Ensembles (HiE) to improve the performance of fine-grained classification by utilizing a label hierarchy and coarse-grained predictions at test-time. The method significantly reduces mistake severity while improving top-1 accuracy on benchmark datasets, achieving state-of-the-art results. The approach is also effective in the semi-supervised setting, bringing notable gains in accuracy and reducing mistake severity as training data decreases for fine-grained classes.
Strengths: Originality: The paper introduces a novel approach called Hierarchical Ensembles (HiE) that combines coarse-grained predictions and label hierarchy to improve the performance of fine-grained classification. This approach is unique and addresses the challenge of reducing mistake severity while improving accuracy in fine-grained classification. Therefore, the paper demonstrates originality in its proposed methodology.
Quality: The paper achieves state-of-the-art results on benchmark datasets by significantly reducing mistake severity and improving top-1 accuracy. The approach is effective not only in the supervised setting but also in the semi-supervised setting, bringing notable gains in accuracy. The paper also compares its approach with previous baselines and demonstrates superior performance. These factors indicate the high quality of the research presented in the paper.
Clarity: The paper provides a clear and concise explanation of the proposed approach, including the motivation, methodology, and experimental results. The authors effectively communicate the problem statement, the significance of their approach, and the experimental setup. The paper also includes figures and examples to enhance clarity. Overall, the paper is well-written and easy to understand.
Significance: The paper addresses the challenge of fine-grained classification, which requires domain expertise and large amounts of labeled data. By utilizing coarse-grained predictions and label hierarchy, the proposed approach significantly reduces mistake severity and improves accuracy. This has practical implications in various domains where fine-grained classification is important, such as image recognition and object detection. The paper's state-of-the-art results and compatibility with existing semi-supervised methods further highlight its significance.
Weaknesses: There are some problems which must be solved before it is considered for publication. If the following problems are well-addressed, this reviewer believes that the essential contributions of this paper are important for fine-grained classification. The paper has some context inconsistency errors; for example, it refers to these models as semi-supervised on line 233, but then refers to them as self-supervised models on lines 242 and 244. In addition, although the complementary method proposed in this paper can be applied to off-the-shelf models, the overall innovation of the paper is insufficient.
Technical Quality: 4 excellent
Clarity: 3 good
Questions for Authors: This paper makes extensive and comprehensive experiments on the new method proposed by the author, which fully proves the effectiveness of the method. The innovations of the paper can be applied to many off-the-shelf models, but there are not enough of them. If more general structures could be proposed, this paper would be able to make a greater contribution to the field dealt with.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent
Presentation: 3 good
Contribution: 3 good
Limitations: The authors adequately address their proposed limitations in previous models, as their proposed method is a complementary structure that can be applied to any existing model and can effectively improve the performance of the model on the dataset.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their thoughtful feedback and would like to address their concerns below.
**Typo on lines 242-244:** Thank you for pointing out this inconsistency. We will update the paper to fix this. It should be semi-supervised on lines 242 and 244.
**Overall Innovation of the work:** We would like to point out that the key innovation in our work is proposing a theoretically principled method that can be applied off-the-shelf to a variety of models to improve accuracy as well as reduce the severity of mistakes in the setting of hierarchical classification. We present comprehensive experiments, and study and compare against several methods exploring hierarchical architectures, hierarchical loss functions and hierarchical embeddings. We find that, despite being minimal, the proposed approach achieves state-of-the-art performance, while providing advantages in terms of adaptability, reproducibility and simplicity of training. We believe that these findings are of significant value to the community.
Strengths: 1. The topic of this paper is well-introduced and the proposed approach is straightforward.
2. The proposed HiE utilizes label hierarchy to improve the performance of fine-grained classification at test time using coarse-grained predictions.
3. The proposed methods and experimental results are well presented and the manuscript is overall well organized.
4. Performance of the proposed method is promising according to the comparison experiments with other state-of-the-art studies.
Weaknesses: 1. The paper lacks a general investigation of hierarchical architectures and hierarchical embeddings.
2. The proof of the Theorem 3.1 is not convincing, thus making the overall scheme lack theoretical support.
3. The experimental setup of semi-supervised learning is not shown clearly, so it is not possible to judge whether the comparison of the proposed method with other methods is fair.
4. Some errors, such as why Figure 2 is 4 levels.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: Please give a detailed motivation for this paper and a detailed derivation of Theorem 3.1
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 2 fair
Contribution: 2 fair
Limitations: The subject of the paper is Test-Time Amendment, thus limiting the potential performance improvement.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their positive feedback. We address the reviewer's concerns and questions below:
**Motivation of the paper:** The primary motivation behind the paper is that training independent coarse and fine-grained classifiers helps to learn complementary features. We provide the thought process behind Eqn. 1 in the global response above (at the top). We show that a minimal post-hoc approach improves upon complex efforts exploring hierarchical architectures, hierarchical loss functions and hierarchical embeddings on two important problems: reducing mistake severity and semi-supervised learning. The proposed approach also provides significant advantages in terms of adaptability, reproducibility and simplicity of training.
**Detailed derivation of theorem:** The detailed derivation of theorem 3.1 is provided below.
We show that if we make a correct prediction at the coarse level, the proposed Hierarchical Ensemble (HiE) is guaranteed to improve the downstream predictions at the fine-grained classification task.
**Theorem 3.1:** Assume $Q = [q_1, q_2 , ..., q_{N_L}]$ and $R = [r_1, r_2 , ..., r_{N_{L-1}}]$ are the predictions obtained at the fine-grained and coarse-grained levels for a given input $x$, such that $\sum_{i=1}^{N_L} q_i = 1$ and $\sum_{i=1}^{N_{L-1}} r_i = 1$. Let $g$ and $g_{parent}$ be the ground-truth labels at the fine-grained and the coarse-grained levels respectively.
Now, assuming that the coarse label is correctly predicted by the coarse prediction network, i.e. $\mathrm{argmax} (R) = g_{parent}$, we wish to prove that:
$$
\frac{q_g \cdot r_{g_{parent}}}{\sum_{j=1}^{N_L} q_j \cdot r_{j_{parent}}} \ge q_g \quad (1)
$$
where the L.H.S. is the prediction for the ground-truth class using HiE and the R.H.S. is the direct prediction of the fine-grained network for the ground-truth class.
**Proof:** The denominator iterates over the fine-grained predictions and multiplies each with its parent's prediction score. This is equivalent to iterating over the coarse-label predictions and multiplying each with the sum of the prediction scores of all its children. By rewriting the denominator, we obtain:
$$
\sum_{j=1}^{N_L} q_j \cdot r_{j_{parent}} = \sum_{j=1}^{N_{L-1}} r_j \sum_{i\in j_{child}} q_{i} \quad (2)
$$
Letting $z_j = \sum_{i\in j_{child}} q_{i}$,
$$
\sum_{j=1}^{N_{L-1}} r_j \sum_{i\in j_{child}} q_{i} = \sum_{j=1}^{N_{L-1}} r_j \cdot z_j = R^T Z \quad (3)
$$
Invoking Hölder's inequality with $a=\infty$ and $b=1$ ($1/a + 1/b = 1$), we obtain:
$$
R^T Z \le \lVert R \rVert_{\infty} \lVert Z \rVert_{1} \quad (4)
$$
Since $\lVert R \rVert_{\infty} = \mathrm{max} (R) = r_{g_{parent}}$ (by the assumption that the coarse prediction is correct) and $\lVert Z \rVert_{1} = \sum_{i=1}^{N_L} q_i = 1$, we can say:
$$
R^T Z \le \lVert R \rVert_{\infty} \lVert Z \rVert_{1} = r_{g_{parent}} \quad (5)
$$
Given Eqn 5, we can conclude that:
$$
\frac{r_{g_{parent}}}{\sum_{j=1}^{N_L} q_j \cdot r_{j_{parent}}} \ge 1 \implies \frac{q_g \cdot r_{g_{parent}}}{\sum_{j=1}^{N_L} q_j \cdot r_{j_{parent}}} \ge q_g \quad (6)
$$
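As a sanity check (our own illustrative snippet, not part of the paper), the bound in Eqn 6 can also be verified numerically: sample random prediction vectors and confirm that, whenever the coarse prediction is correct, the HiE score of the ground-truth class is never lower than the fine-grained classifier's score. The function name `hie_score` is ours.

```python
import numpy as np

def hie_score(Q, R, parent):
    # HiE prediction: fine-grained probs re-weighted by their parents'
    # coarse probs, then renormalized (the L.H.S. of Eqn 6 per class).
    w = Q * R[parent]
    return w / w.sum()

rng = np.random.default_rng(0)
parent = np.array([0, 0, 1, 1, 2])      # fine class -> coarse parent
for _ in range(1000):
    Q = rng.dirichlet(np.ones(5))       # fine-grained softmax output
    R = rng.dirichlet(np.ones(3))       # coarse-grained softmax output
    g = int(rng.integers(5))            # ground-truth fine class
    if np.argmax(R) == parent[g]:       # the theorem's assumption
        assert hie_score(Q, R, parent)[g] >= Q[g] - 1e-12
```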
**Hierarchical Architectures and Embeddings:** We would like to point out that we sufficiently discuss hierarchical architectures and embeddings in the related work. We compare against four hierarchical architectures [8,11,25,6] and a hierarchical embeddings-based method [5]. We also reproduce two baselines (Cross-Entropy-H and HiE Self). \
Additionally, based on the suggestions provided, we experiment with a unified architecture with a shared feature backbone and only different classification heads for coarse and fine-grained classes. We also experiment with applying HiE over the predictions from the separate classification heads. The results are given below:
| Method | Top-1 | Avg. Mistakes | Hier Dist@1 | Hier Dist@5 | Hier Dist@20 |
|------------|------:|--------------:|------------:|------------:|-------------:|
| Baseline | 63.70 | 2.38 | 0.86 | 1.96 | 3.24 |
| Unified Arch. | 63.32 | 2.20 | 0.81 | 1.50 | 2.62 |
| HiE Unified| 63.10 | 2.17 | 0.80 | 1.33 | 2.26 |
| HiE Separate | 64.61 | 2.15 | 0.76 | 1.33 | 2.19 |
However, if the reviewer has additional suggestions for baselines/methods proposing hierarchical architectures and embeddings, we would be happy to incorporate them.
**Clarification of the semi-supervised learning setting:** In the semi-supervised setting (Section 4.2, lines 225-228), we assume the availability of a large number of coarsely labelled samples and a small number of fine-grained samples. We experiment with reducing the number of fine-grained annotations from 100 annotations per class to merely 10 annotations per class. We compare against several methods and show results with and without CRM, and with and without HiE.
**Figure 2 caption:** We would like to clarify that Figure 2 refers to a 2 level hierarchy with 4 leaf classes. | Rebuttal 1:
Rebuttal: We describe the steps clarifying the derivation of Eqn 1 in the paper. We use slightly different notations for the sake of improved clarity.
```
        (X)
       /   \
      /     \
     /       \
   (C)       (F)
```
*Graphical model for separate classifiers trained on coarse and fine-grained labels: (X) is the input image; (C) and (F) are the coarse and fine-grained labels, respectively.*
Considering the given graphical model, by the product rule, we have:
$$
P(C, F, X) = P(C, F | X) \cdot P(X) \quad (1)
$$
By factorizing the graphical model we obtain,
$$
P(C, F, X) = P(C | X, \phi) \cdot P(F | X, \theta) \cdot P(X) \quad (2)
$$
From Eq. (1) and Eq. (2), we have,
$$
P(C, F | X) = P(C | X, \phi) \cdot P(F | X, \theta) \quad (3)
$$
From Eq. (3), we have conditional independence between the predictions of the coarse and fine-grained classifiers, i.e. $C \perp\!\!\!\perp F \mid X$.
Now, assuming access to a label hierarchy $\mathcal{H}$ between coarse and fine labels, we define the following score function between the coarse classifier's prediction for the $j^{th}$ coarse class $c_j$ and the fine classifier's prediction for the $i^{th}$ fine class $f_i$:
$$
S(f_i, c_j ; x ,\theta, \phi, \mathcal{H}) = P(f_i | x, \theta) \cdot P(c_j | x, \phi) \cdot \mathbb{1}_{\mathcal{H}}(c_j = \text{parent}(f_i)) \quad (4)
$$
where $\mathbb{1}\_{\mathcal{H}}$ is the indicator function encoding the label hierarchy. Assuming that each fine-grained label is assigned only a single coarse label and writing $c_{i_{\text{parent}}} = \text{parent}(f_i)$, we can simplify the above equation to:
$$
S(f_i, c_{i_{\text{parent}}} ; x ,\theta, \phi) = P(f_i | x, \theta) \cdot P(c_{i_{\text{parent}}} | x, \phi) , \text{ for } i=1,2,...,N_L, \quad (5)
$$
We normalize Eq. (5) to make it a valid probability density function:
$$
P(f_i, c_{i_{\text{parent}}} | x ,\theta, \phi) = \frac{P(f_i | x, \theta) \cdot P(c_{i_{\text{parent}}} | x, \phi)}{\sum_{j=1}^{N_{L}} P(f_j | x, \theta) \cdot P(c_{j_{\text{parent}}} | x, \phi)} , \text{ for } i=1,2,...,N_L, \quad (6)
$$
Eqn (6) is equivalent to Eqn 1 in the paper. | NeurIPS_2023_submissions_huggingface | 2,023 | null | null | null | null | null | null | null | null |
Learning Topology-Agnostic EEG Representations with Geometry-Aware Modeling | Accept (poster) | Summary: The work proposes a cross-dataset cross-electrode montage deep-learning pretraining method. As a model, they use a transformer that uses the electrode coordinates as positional encodings. The transformer model then processes differential entropy features per electrode as tokens with attention across all electrodes. Additionally, it has tokens for 17 predefined brain regions with attention restricted to attention between a region and electrodes within that region as well as between regions. The transformer is pretrained using a masked autoencoder framework, where the transformer predicts differential-entropy values of randomly masked-out electrodes. Masked-out electrodes are either chosen completely randomly or all electrodes of randomly chosen brain regions are masked out. The work evaluates both cross-dataset and cross-montage experiments, showing improvements from their chosen pretraining scheme.
Strengths: The approach has a clear motivation to do cross-montage pretraining, which is a potentially very useful capability. The method is straightforward and mostly easy to understand. Evaluation setup is interesting. Ablation studies are also useful.
Weaknesses: I did not see how many random seeds are used to obtain the results. A lot of times accuracy differences are relatively small and may be impacted by random noise from the training process. Ideally, results should be obtained by averaging results from multiple seeds and this should be reported in the paper.
Another ablation, in which 17 artificial tokens are added that are not region-specific (i.e., they have full attention to all electrodes and all other artificial tokens), would be necessary to distinguish improvements due to model capacity from those due to region-specificity.
Table 2 Pre-trained dataset should be pre-training dataset
A lot of the fonts/text in figures is hard to read, especially Figure 1 d), but also others.
“subjects(8 f” -> missing space on p.6 l 215
Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: For attention, this was a bit confusing to me:
"As a new hierarchy is introduced and only the information from region-wise nodes is used to reconstruct the origin nodes, the unified representation must be capable of both intra-region and inter-region reconstruction."
As far as I understand Fig 1.d), there is attention from all electrodes to all electrodes, from all regions to all regions, and from any region to all electrodes of that region? The sentence above made it sound like maybe the first part (all electrodes to all electrodes) doesn't exist? Also, please write this explicitly in the text in 3.3.2, so it is more clear.
Regarding the positional encoding, how does it relate to the positional encoding from the ViT vision transformer? Please also mention in the text where it is similar or different.
In Figure 3, all scalp plots of the original column look very similar to me. Could one find a more diverse set of examples?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Ablation study as written above might be more helpful to understand contribution of different parts.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your comments and suggestions, which are valuable for enhancing our paper. The following are responses to individual concerns:
1. **Random seeds (W1)**: We follow the previous work [1,2] to perform subject-dependent classification with five random seeds to reduce the effects of random noise. We will clarify this in the revised version.
2. **Comprehensive ablation study on region-specificity (W2)**: Thanks for your suggestion! Although we have already investigated region-specificity in spatial modeling by shuffling the attention between region-wise tokens and all electrodes (Table 1 in the Supplementary file, 17 Chaos vs. 17 default), your suggestion inspired us to add 17 artificial tokens without any region specification as another ablation. The result is shown in the following table (full attention). Our design still outperforms the one with full attention, further validating the effectiveness of region-specificity.
| Attention setting | Fine-tune accuracy on SEED (%) |
|----------|----------|
| ours | 94.61 |
| chaos | 93.96 |
| full attention | 94.08 |
3. **Text and figure suggestions (W3, W4, W5)**: Thanks for the suggestions about the text and figures. We went through our manuscript and revised all the typos we could find accordingly. Besides, we polished our figures for better readability, and you can find them in the attached PDF file.
4. **Information Flow and Attention configuration (Q1)**: Our framework generally follows the encoder-decoder / encoder-classifier schema, where the multi-channel EEG data are first encoded by the encoder, and then *only the region-wise nodes* are used for reconstruction / the downstream task with the decoder and the classifier. The quoted sentence *"As a new hierarchy is introduced, and only the information from region-wise nodes is used to reconstruct the origin nodes, the unified representation must be capable of both intra-region and inter-region reconstruction."* is meant to explain how we design such a bottleneck, where only the unified representation from the encoder is used to reconstruct EEG channels in the decoder. As for the attention configuration, the first part (all electrodes to all electrodes) exists in the forward process of both the encoder and decoder layers, though the origin nodes have been re-masked to zeros before being fed to the decoder. We will make these points more explicit in the revised version.
5. **Difference between positional encoding in ViT and ours (Q2)**: ViT incorporates three kinds of positional encoding: 1-D, 2-D, and relative. Our multi-dimensional positional encoding is similar to 2-D positional encoding, but (x, y) denotes different things in the two concepts: the former uses (x, y) to denote the physical location of EEG electrodes on the scalp, while the latter uses (x, y) to denote a location in a grid. We will clarify this in the revised version.
6. **More diverse examples (Q3)**: We will provide a diverse set of examples in the revised version, and a preview can be referred to Figure 2 in the attached PDF file.
[1] T. Song, W. Zheng, P. Song and Z. Cui, "EEG Emotion Recognition Using Dynamical Graph Convolutional Neural Networks," in IEEE Transactions on Affective Computing, vol. 11, no. 3, pp. 532-541, 1 July-Sept. 2020, doi: 10.1109/TAFFC.2018.2817622.
[2] Rui Li, Yiting Wang, Wei-Long Zheng, and Bao-Liang Lu. 2022. A Multi-view Spectral-Spatial-Temporal Masked Autoencoder for Decoding Emotions with Self-supervised Learning. In Proceedings of the 30th ACM International Conference on Multimedia (MM '22). Association for Computing Machinery, New York, NY, USA, 6–14.
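As an aside on the positional-encoding discussion in point 5 above: a generic sketch of applying ViT-style sinusoidal encodings to continuous 2-D electrode coordinates is shown below. This is purely illustrative and hypothetical (the function `sinusoidal_pe` and the split of dimensions between x and y are our assumptions); the paper's multi-dimensional positional encoding may differ in detail.

```python
import numpy as np

def sinusoidal_pe(coords, d_model=64):
    # Encode continuous (x, y) scalp coordinates with sinusoids: the first
    # half of the dimensions encodes x, the second half encodes y, each
    # with interleaved sin/cos at multiple frequencies (d_model % 4 == 0).
    half = d_model // 2
    freqs = 1.0 / (10000.0 ** (np.arange(0, half, 2) / half))
    pe = np.zeros((len(coords), d_model))
    for axis in range(2):  # axis 0 -> x, axis 1 -> y
        angles = coords[:, axis:axis + 1] * freqs
        pe[:, axis * half:axis * half + half:2] = np.sin(angles)
        pe[:, axis * half + 1:axis * half + half:2] = np.cos(angles)
    return pe
```

The key difference from grid-based 2-D encodings is only in what (x, y) means: here they are physical scalp coordinates shared across montages, so electrodes from different datasets map into the same positional space.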
---
Rebuttal Comment 1.1:
Comment: Thanks for the responses and additional effort put into strengthening the contributions of the study. Due to the additional ablations and additional TUEG study, I increase my score to strong accept.
---
Reply to Comment 1.1.1:
Comment: We greatly appreciate the reviewer's positive response to our revision, and we will polish the manuscript accordingly. Again, we sincerely thank you for raising the rating score for our paper.
Strengths: 1. The paper aims to address a very important problem. The difference in channel selections for scalp EEG is a critical limitation of current studies.
2. The proposed method is intuitive yet novel; the multi-scale channel hierarchy is carefully and nicely designed. It is a valid and meaningful contribution to the community.
3. The proposed method achieves adequate performance improvements over the benchmark.
4. The writing and organization are clear. The paper is enjoyable to read.
Weaknesses: Minor concerns below:
1. Aside from "montage (the number and the places of electrodes placed on the scalp) and sample rate" as mentioned in lines 24-25, another issue that prevents methods from being transferable to another scalp EEG dataset might be the type of electrodes, e.g. wet electrodes vs. dry electrodes, and the corresponding domain shifts of the collected signals. While this is not the major issue to be tackled in this work, it would be great if the authors could mention/discuss this factor (and other possible factors) in either the introduction or the conclusion section.
2. The related work section is not 100% complete. Some arguments are not accurate and could use some edits.
- Aside from [6] and [7], https://arxiv.org/abs/2007.04871 [Subject-Aware Contrastive Learning for Biosignals] also explored spatial augmentations for EEG data.
- "To the best of our knowledge, there is no method trying to use cross-dataset EEG as their pre-training corpus.", in [BENDR: Using Transformers and a Contrastive Self-Supervised Learning Task to Learn From Massive Amounts of EEG Data] section 2.1.1 "This also means that these data should include multiple different recording hardware and configurations." Although they eventually used TUEG dataset as the pre-training dataset (which is defaulted to be the same configuration, but due to the TUH data collection pipeline the actual data contain many variations), so I am not sure if this should be counted as "cross-dataset pre-training", this should be discussed.
- Overall, the section "Self-supervised learning for EEG" should include more related works. The authors should spend more efforts doing the literature search on this front.
3. The experimental details could be more comprehensive, some of the details should be included within the main text.
- Is the train/validation/test split randomly created or based on different subjects? Correct me if I am wrong, but I did not find related details in both the main text and the appendix. Is there k-fold cross-validation for different subjects? SEED is a relatively small dataset, so I'd expect such a dataset manipulation would not be too computationally expensive.
4. Minor writing issues. e.g. line 98 "It's" --> "It is".
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: The experiments focused on emotional detection as the major task. While in EEG, sleep staging datasets contain much more abundant unlabelled (or labelled, in some cases) publicly available datasets that could be used for pretraining. Have the authors considered that dataset, as the proposed method is supposed to work well w cross-dataset EEG?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations: The authors should discuss more limitations of the work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your comments and suggestions, which are valuable for enhancing our paper. We are excited that our paper provided a pleasant reading experience and that our contributions are well recognized.
Responses to your concerns and questions are hereby presented:
1. **Discussion on more transferring issues (W1)**: Good points! It is insightful to consider the type of electrodes. Besides that, inconsistent time scopes and feature pre-processing are also factors that prevent transfer between tasks. These are all core challenges facing the EEG pre-training area, and we will add this discussion to the limitation section in the revised version. While these challenges are beyond the scope of this paper, we hope this work provides a powerful tool to tackle the cross-montage problem and inspires the related community.
2. **Additional related work about spatial modeling with EEG (W2.1)**: Thanks for the suggestion. [Subject-Aware Contrastive Learning for Biosignals] incorporates time, frequency, and spatial augmentations on EEG data for contrastive learning. That method also works on a fixed channel set and does not face the cross-montage problem. We will cite it with a brief discussion.
3. **Additional related work about self-supervised training with EEG (W2.2)**: Thanks for the suggestion. BENDR performs cross-dataset pre-training by selecting a 19-channel configuration, padding missing channels with zeros, and dropping surplus channels during the data preparation stage. Unlike our work, it does not tackle the cross-montage problem from the model's perspective.
When faced with 62-channel data (the SEED series in our experiments), BENDR keeps only the 19 channels in the 10-20 system and ignores the rest. This leads to substantial information loss, whereas our method utilizes the full spatial information under different channel configurations. We will cite BENDR and add a brief discussion in the revised version.
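To illustrate the information loss, here is a minimal sketch (a hypothetical helper, not BENDR's actual code) of aligning a recording to a fixed montage by zero-padding missing channels and dropping channels outside the target set:

```python
import numpy as np

def align_to_fixed_montage(data, channel_names, target_names):
    """Zero-pad missing target channels and drop channels outside the
    fixed target montage (e.g. a 19-channel 10-20 configuration)."""
    aligned = np.zeros((len(target_names), data.shape[1]))
    for i, name in enumerate(target_names):
        if name in channel_names:
            aligned[i] = data[channel_names.index(name)]
    return aligned

# Toy example: a 3-channel recording aligned to a 2-channel "montage".
data = np.arange(6, dtype=float).reshape(3, 2)  # rows: Fp1, Cz, O1
out = align_to_fixed_montage(data, ["Fp1", "Cz", "O1"], ["Cz", "Pz"])
# Cz is kept, Pz is zero-padded, and Fp1 and O1 are discarded entirely.
```

Every discarded row is spatial information the model never sees, which is exactly the loss the rebuttal describes.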
4. **Experiment settings (W3)**: We follow the experiment setting of previous work [2, 3], where the reported results are averaged over subjects. Note that all subjects share the same number of samples; samples from the first nine trials are used for training, while samples from the last six trials are used for testing. Details of the dataset can be found in SEED [4], and we will clarify them in the revised version.
5. **Text suggestions (W4)**: Good catch! We went through the whole manuscript and revised all the typos we found.
6. **Larger dataset for pre-training (Q1)**: We have handled the cross-montage problem, but broader cross-task transfer remains non-trivial because of the challenges mentioned in the response **Discussion on more transferring issues**, such as inconsistent time scopes, task-specific feature pre-processing, etc. Some of these challenges are beyond the scope of this paper. Despite them, we still manage to pre-train our model on a large-scale EEG dataset, the TUEG dataset [1]. Note that we maintain a similar experiment setting for pre-training and fine-tuning, except that pre-training uses the 21 channels of the 10-20 international system. The fine-tuning results on SEED are shown in the following table. Even though there are large gaps in recording devices, subjects' physiological status, and so on, the model pre-trained on this massive dataset outperforms the model pre-trained on the downstream dataset.
| Pre-training dataset (pre-training epoch)| Fine-tune Accuracy on SEED(%) |
|-----|-----|
| random initialization | 94.59 |
| TUEG (1 epoch) | 94.85 |
| TUEG (2 epoch) | 95.11 |
| SEED | 95.15 |
| TUEG (3 epoch) | 95.29 |
[1] Harati A, Lopez S, Obeid I, et al. The TUH EEG CORPUS: A big data resource for automated EEG interpretation[C]//2014 IEEE signal processing in medicine and biology symposium (SPMB). IEEE, 2014: 1-5.
[2] T. Song, W. Zheng, P. Song and Z. Cui, "EEG Emotion Recognition Using Dynamical Graph Convolutional Neural Networks," in IEEE Transactions on Affective Computing, vol. 11, no. 3, pp. 532-541, 1 July-Sept. 2020, doi: 10.1109/TAFFC.2018.2817622.
[3] Rui Li, Yiting Wang, Wei-Long Zheng, and Bao-Liang Lu. 2022. A Multi-view Spectral-Spatial-Temporal Masked Autoencoder for Decoding Emotions with Self-supervised Learning. In Proceedings of the 30th ACM International Conference on Multimedia (MM '22). Association for Computing Machinery, New York, NY, USA, 6–14.
[4] Zheng W L, Lu B L. Investigating critical frequency bands and channels for EEG-based emotion recognition with deep neural networks[J]. IEEE Transactions on autonomous mental development, 2015, 7(3): 162-175.
---
Rebuttal 2:
Title: Response to rebuttal
Comment: Thank you for providing a detailed response. I have read through the rebuttal and the other reviewers' comments and tend to keep my score the same. While the additional experiments improve the paper's contribution, I think they only better support the original claim, of which I am already convinced.
---
Rebuttal Comment 2.1:
Comment: We sincerely value the reviewer's acknowledgment of our contribution and will update the related work section accordingly. | Summary: The paper proposes an innovative approach to the pre-training of models for EEG data. Large-scale pre-training, which has demonstrated great potential in CV and NLP, requires a substantial amount of data. While EEG data are relatively easy to collect, their interpretation and labelling often require substantial expert effort. Although the integration of various datasets can address this concern, the differing electrode configurations among these datasets can result in domain shifts and dimension mismatches.
The paper introduces a framework named MMM, which stands for Multi-dimensional position encoding, Multi-level channel hierarchy, and Multi-stage pre-training strategy. The novelty lies in the mapping of all EEG channel selections onto a unified topology, enabling the development of a pre-training framework to learn unified, geometry-aware EEG representations that can generalize across different EEG channel configurations.
In their approach, the authors encode the spatial information into the representation and develop a method that allows pre-training with an EEG corpus having various sensor configurations. This is achieved through the concept of region-wise tokens, which form a multi-level hierarchy learning from the EEG channels. These tokens eventually form a unified representation that can be applied to downstream tasks.
Additionally, the paper proposes a multi-dimensional position encoding technique to encapsulate geometric sensor information and a multi-stage mask strategy involving random and region-wise masks designed to enhance the robustness of hidden representations.
The authors validate their framework against a wide range of state-of-the-art methods on EEG emotion recognition tasks. Their experimental results indicate that the proposed method not only achieves state-of-the-art performance but also exhibits a strong ability to transfer across different datasets, even those with differing montages. The author also conducted an ablation study, underscoring the validity and robustness of each proposed component, contributing to the MMM framework's overall effectiveness.
Strengths: 1. This manuscript makes an innovative contribution to the field by addressing the crucial issue of sensor configuration heterogeneity, which is a significant barrier to enabling large-scale pretraining using multiple EEG datasets. The novelty of the approach opens a plethora of opportunities for future research in EEG decoding and is commendable.
2. The presentation is excellent, with cohesiveness that makes the paper feel complete and thorough. All elements, from figures to text, complement each other well, creating a pleasant reading experience. The figures are particularly well designed, effectively illustrating the complex method in an understandable way. The choice of colour scheme enhances the visual appeal and readability of the work.
3. The manuscript's strength also lies in the clear articulation of motivation and contributions in the introduction. The narrative throughout the methodology section is seamless, explaining the proposed method with great clarity and depth.
4. The rigorous experimental validation of the findings is highly appreciated. The authors included comprehensive experiments covering classification tasks with various state-of-the-art methods based on RNN, CNN, and GNN. The work's transferability across different datasets and electrode configurations is also well demonstrated. In addition, the authors performed a detailed ablation study to validate the significance of each proposed component, further strengthening the work.
5. The inclusion of qualitative results, such as the topographic plot of the reconstructed data, is valuable. The results demonstrated the superior performance of the proposed MMM method in reconstructing missing channels at high mask ratios.
6. Overall, this paper makes a significant contribution to the field and is likely to inspire future work. The quality of presentation, the novelty of the approach, and rigorous experimentation all stand out as notable strengths of the work.
Weaknesses: 1. While the overall design and information presentation in Figure 1 is commendable, it might be beneficial to consider further optimizing the text sizes for enhanced readability. There are noticeable empty spaces that could potentially be utilized to enlarge the text, improving the figure's overall clarity and effectiveness.
2. The manuscript could benefit from additional clarification regarding the reconstruction experiment, particularly in terms of the masking method employed. Are global random masking or regional masking methods used, or perhaps a combination of both? Exploring potential performance differences between the two masking methods would be an insightful addition. Furthermore, it would be beneficial to provide a comparison of the Mean Squared Error (MSE) between different masking methods to better understand the relative performance of the proposed method.
3. When considering the transferability experiments across different datasets and montages, it might be prudent to include the state-of-the-art methods that were employed in the regular classification task. These methods can be trained on the common channels between different datasets, aligning with conventional practices in the literature. Additionally, incorporating established EEG transfer learning techniques as comparisons for these experiments could offer a more comprehensive view. This inclusion will make the results more comparable to existing literature and potentially broaden the generalizability of the findings.
Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: 1. For the reconstruction experiment, does the mask use global random masking or regional masking?
2. Will the code be made available?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations: please refer to the weaknesses
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate your comments and suggestions, which have truly enhanced the quality of our paper. We are glad that we provided a pleasant reading experience and that our contributions are well recognized.
Responses to your concerns are presented as follows:
1. **Text size of Figure 1 (W1)**: Thanks for your suggestion; Figure 1 will be revised in the next version, and a preview is available in the attached PDF file.
2. **Details of the multi-stage mask strategy (W2, Q1)**: The global random and regional masking methods are used alternately during pre-training; the details are in the supplementary material (Lines 20-23). We will clarify this setting in the manuscript for higher readability.
Moreover, thanks for suggesting an ablation study that uses MSE to compare the masking strategies, whereas we currently compare them via downstream task performance (Supplementary Material Section 3.3). The results shown in the following table confirm our statement that an appropriate mix rate of mask strategies brings better representations (Line 90 in the Supplementary Material). We will add these experiments to the revised version of our supplementary file.
| Ratio of Random Masking |Ratio of Regional Masking| Mean Square Error($\times10^{-4}$) |
|----------|----------|----------|
|0.3|0.7|8.26|
|0.5|0.5|8.07|
|0.7|0.3|8.12|
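As one illustration of alternating the two strategies at a given mix rate, here is a sketch (hypothetical function names; our released code may differ) of sampling a per-step channel mask:

```python
import numpy as np

def random_mask(n_channels, ratio, rng):
    """Global random masking: hide a random subset of all channels."""
    mask = np.zeros(n_channels, dtype=bool)
    hidden = rng.choice(n_channels, size=int(n_channels * ratio), replace=False)
    mask[hidden] = True
    return mask

def regional_mask(regions, rng):
    """Regional masking: hide every channel of one randomly chosen region."""
    n_channels = sum(len(r) for r in regions)
    mask = np.zeros(n_channels, dtype=bool)
    mask[list(regions[rng.integers(len(regions))])] = True
    return mask

def sample_mask(regions, p_random, ratio, rng):
    """Pick a strategy per training step; p_random is the mix rate."""
    n_channels = sum(len(r) for r in regions)
    if rng.random() < p_random:
        return random_mask(n_channels, ratio, rng)
    return regional_mask(regions, rng)
```

With `p_random = 0.5`, the sampler matches the best-performing 0.5/0.5 mix in the table above.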
3. **Baseline trained on the common channels (W3)**: It is insightful to train models on the common channels to avoid inconsistent channel configurations. Previous work [1] has investigated the effectiveness of focusing on common channels. We also conduct more experiments with MAE, MMM, and DGCNN, as shown in the following table. To simplify the table, the [Pre-train, Fine-tune, Test] columns denote the number of channels used in the corresponding stage. The results show that the information loss caused by restricting to the common channels cannot be ignored. With more tasks, the set of common channels shrinks, and more information is lost. This is one of the motivations for our method, which can handle different channel configurations.
| Method |Pre-train|Fine-tune/train |Test| Accuracy (%)|
|----------|----------|----------|----------|----------|
|MMM|/|62|62|93.76|
|MMM|62|62|62|94.61|
|MMM|32|62|62|93.97|
|MAE|/|62|62|87.89|
|MAE|/|32|32|83.25|
|DGCNN|/|62|62|90.04|
|DGCNN|/|32|32|83.25|
4. **Code availability (Q2)**: We will publish both the code and the pre-trained model upon acceptance.
[1] Kostas D, Aroca-Ouellette S, Rudzicz F. BENDR: using transformers and a contrastive self-supervised learning task to learn from massive amounts of EEG data[J]. Frontiers in Human Neuroscience, 2021, 15: 653659.
---
Rebuttal Comment 1.1:
Comment: Thanks for the clarifications and responses to my comments.
---
Reply to Comment 1.1.1:
Comment: We greatly appreciate the reviewer's positive feedback and recognition of our efforts. | Summary: This work seeks to improve classification tasks over EEG brain data using pre-training. There exist many EEG datasets, but not all datasets use the same montage format. To leverage all the datasets together, this work presents an approach for learning generic representations of EEG data. This approach proceeds in two stages. In the first stage, EEG data is given as input to a Transformer-based autoencoder. The bottleneck for this autoencoder is a small set of region-nodes, which are meant to represent the regions of the brain. In the second stage, the representations for these region nodes are used for fine-tuning on a classification task. The benefit of this method is that the region representation is agnostic to montage format.
Strengths: - This work sets out to address an important question: how do we handle the fact that there is a lot of EEG data out there, but not all of the same montage format? Can we pre-train over all this data at once?
- The presented method is a reasonable first approach to this problem.
Weaknesses: - My main concerns are clarity and the significance of the results
- The aim of this work is to show the ability to transfer between different datasets, but as described in 4.1, all the datasets considered seem to have a very similar source. The most different datasets, the Lite datasets, are created by extracting data from the other datasets being considered. It would strengthen this work if a wider variety of sources was considered. Or perhaps it could be better explained why the present datasets are more different than they might appear.
- The results presented in 4.5 show inconsistent and small improvement when transferring between montages.
- I have some doubts about the underlying theory in this work. It's claimed that there is a hierarchy in the representation, because independent region nodes are given as input to the model. But there is nothing about the architecture that constrains region information to pass through the region nodes.
- There are many small typos throughout. For example, sentence 1 of the abstract should probably read: "Large-scale pre-training has shown great potential to enhance models on downstream tasks in vision and language". Sometimes these typos impede clarity. For example, line 176 refers to a "symmetry stack", and it is unclear whether this denotes a specific type of transformer, or if it just means that the decoder is symmetrical to the encoder. Lines 38 and 39 have similar typos that make reading difficult. I definitely don't mean to pick on the English writing, and overall the sentences are mostly clear, but I think the clarity would be greatly improved if it was given a close reading for typos.
- The ablation study doesn't contain an experiment where the positional encoding is ablated. It would be useful to see this, since it's a key part of the method.
- Similarly, no experiment is done where Multi-level channel hierarchy is the only component missing
- I have other concerns about clarity (see questions below)
Technical Quality: 2 fair
Clarity: 1 poor
Questions for Authors: - In general, how do you think this method would perform in the case where you have two datasets, but the set of regions present in each dataset is different? For example, dataset A might contain recordings for regions 1, 2, and 3. And Region B might only contain recordings on regions 4, 5, and 6.
- Can you say briefly what the main difference between MAE and MMM is?
- On line 151, it mentions the (x,y) coordinates of the nodes. What are these coordinates relative to? Every scalp is different, so were projections made to a common scalp?
- How is the Lite series created? How are the 32 channels selected? Are they the same 32 channels for every subject?
- On line 233, why does it say that the method only uses one-tenth of the information?
- I'm not sure how to understand the experiments in section 4.2? Are these the results when training is restricted to a single subject? Which subject?
- Why are there two seemingly identical models in Table 1? I'm looking at the third-from-last and second-from-last rows?
- In table 1, what is the standard deviation taken over?
- I'm not sure how to read the text in section 4.4 and table 3. What does the "SEED (Fine-tune)" row denote? Aren't the other models also finetuned on SEED?
- On line 261 it says that "We pre-train models using the Lite series dataset", but then table 3 seems to suggest that different pre-training datasets are used. For what models was the Lite series used?
- The text in 4.4 makes comparisons to the "model trained from scratch" but all the rows in the table show deltas with respect to "SEED (Fine-tune)." I suggest showing the deltas with respect to the model trained from scratch.
- On line 265, it says that the performance drop is due to the fact that SEED-IV-Lite only has 32 channels. But isn't this the case for SEED-Union-Lite dataset, which shows an increase in performance?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 1 poor
Contribution: 2 fair
Limitations: Limitations and potential social impacts are not discussed. But for this work, I think this is alright.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your valuable suggestions, which help us enhance the clarity of this paper and make it more understandable for the wider community. We reviewed your suggestions and our manuscript to ensure that all the typos, vague descriptions, and unclear settings we could find (W1, W5, Q4, Q6, Q8) are properly addressed in the revised version. Please kindly refer to the global rebuttal for more details.
For individual concerns, we respond as follows:
1. **Wider variety of sources (W2)**: SEED and SEED-IV contain different subjects and stimulating materials, which is a non-trivial difference in the EEG area. SEED and SEED-Lite have different montages. Although they are designed for the same task, to our knowledge there is no prior work on either of these two kinds of transfer.
Despite the challenges, we pre-train our model on a large-scale EEG dataset, the TUH EEG Corpus [2]. We maintain a similar experiment setting for pre-training and fine-tuning, except that pre-training uses 21 channels. Please kindly refer to the global response for details and results.
2. **Inconsistency and small improvements in Section 4.5 (W3)**: The ablation study is based on the setting of pre-training on SEED / SEED-Lite and fine-tuning on SEED. Regarding the inconsistency, it suggests that the multi-stage mask strategy aids in enhancing the model's generalizability across various datasets (Lines 282-285). As for the magnitude, a 3.88% downstream accuracy increase is observed for MMM pre-trained on SEED-Lite with full components over the variant with no components. We respectfully disagree that this is a small improvement.
3. **Hierarchy and constraints (W4)**: The hierarchy consists of region-wise nodes, EEG channels, and their communications. We constrain each region-wise node not to interact with EEG channels outside its region. Compared with constraints on cross-region channel interaction, this scheme models the EEG signal more efficiently without breaking the hierarchical structure.
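This constraint can be sketched as a boolean attention mask. The scheme below is illustrative, under assumptions stated in the comments, not the paper's exact implementation:

```python
import numpy as np

def hierarchy_attention_mask(region_of_channel, n_regions):
    """Tokens are ordered [region tokens | channel tokens]; True = allowed.

    Assumed scheme: region tokens communicate with each other and with
    the channels of their own region only; channel tokens interact within
    their region and with their own region token.
    """
    n_ch = len(region_of_channel)
    n = n_regions + n_ch
    allow = np.eye(n, dtype=bool)
    allow[:n_regions, :n_regions] = True              # region <-> region
    for c, r in enumerate(region_of_channel):
        i = n_regions + c
        allow[r, i] = allow[i, r] = True              # region <-> own channel
        for c2, r2 in enumerate(region_of_channel):
            if r2 == r:
                allow[i, n_regions + c2] = True       # same-region channels
    return allow

# Two regions, three channels: channels 0 and 1 in region 0, channel 2 in region 1.
mask = hierarchy_attention_mask([0, 0, 1], n_regions=2)
```

Cross-region information then flows only through the region-token rows, which is what preserves the hierarchical structure.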
4. **Typos (W5)**: Thanks again for pointing out the typos. The phrase "symmetry stack" means that the decoder is symmetrical to the encoder; we will rephrase it for clarity.
5. **Comprehensive ablations (W6, W7)**: We validate the effectiveness of positional encoding by comparison with the second row of Table 4. A similar comparison between the second and third rows validates the multi-level channel hierarchy.
Adding multi-stage pre-training without the multi-level channel hierarchy is not very sensible; however, we conduct the additional experiments for rigor, as shown in the following table.
|Multi-dimensional PE | Multi-level Channel Hierarchy| Multi-stage Pre-training| Accuracy on SEED (%) |
|-|-|-|-|
|1|0|1|91.93|
|0|1|1|95.12|
|1|1|1|95.15|
6. **Non-overlapping case (Q1)**: You are right that our method cannot handle this extreme case with no overlapping channels. However, most datasets have channels uniformly distributed over the scalp, such as TUEG [2], Sleep-EDF [3], etc., and the majority use subsets of the 10-20/10-10 international systems [1]. Even if the extreme case occurs, we could practically introduce a third dataset C that overlaps with both A and B.
7. **Main difference between MAE and MMM (Q2)**: Table 4 shows that the main difference between MAE and MMM comes from the multi-level channel hierarchy, i.e., 1) replacing the class token with region-wise tokens for a spatially aware representation (Lines 168-169), and 2) reconstructing from the region-wise tokens instead of the visible channels to strengthen the capability of the representation (Lines 176-178).
8. **Coordinates for position encoding (Q3)**: The (x, y) coordinates are relative to the channels' positions in the 10-10 international system [1] (Line 152). Thanks to this unified system, most EEG-related research does not need to worry about different scalps.
9. **Creating the Lite series (Q4)**: The Lite series is created by evenly selecting 32 of the 62 channels. The selected channels remain the same across subjects.
10. **One-tenth information (Q5)**: MV-SSTMA uses information from ten timesteps, while we use only one timestep (shown in the "Window Size" column of Table 1).
11. **Subject-dependent classification (Q6, Q8)**: We follow previous work [4, 5] in performing subject-dependent classification. Each subject has 15 recorded trials with emotion labels; the first 9 are used for training and the remaining 6 for testing. Each subject has its own training and test sets, and the reported results are averaged over subjects. The standard deviation is taken over the per-subject results.
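The evaluation protocol above can be sketched as follows (illustrative helper names, not the paper's code):

```python
import numpy as np

def split_subject_trials(trials):
    """Per-subject split: trials 1-9 for training, trials 10-15 for testing."""
    assert len(trials) == 15
    return trials[:9], trials[9:]

def report(per_subject_accuracy):
    """Reported numbers: mean and standard deviation over subjects."""
    acc = np.asarray(per_subject_accuracy, dtype=float)
    return acc.mean(), acc.std()

# One subject's 15 trials, then an aggregate over three example subjects.
train, test = split_subject_trials(list(range(1, 16)))
mean, std = report([94.2, 95.8, 93.7])
```

Note that the split is chronological within each subject, so no test trial leaks into training for that subject.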
12. **Seemingly identical models (Q7)**: The difference is the dagger symbol (†), which denotes excluding the worst-performing session out of three for each subject when calculating the accuracy (refer to the caption of Table 1). This allows fair comparison under the different settings used in previous works, i.e., results with/without the dagger symbol are directly comparable to prior results under the corresponding setting.
13. **Explanation of Section 4.4 (Q9-11)**: Section 4.4 compares models along different axes: 1) method (MMM or MAE), 2) fine-tuning setting (full or partial fine-tuning), and 3) pre-training dataset (none, SEED, or the Lite series). The "SEED (Fine-tune)" row denotes the model pre-trained and fully fine-tuned on SEED. The other rows denote partially fine-tuned models, i.e., the encoder parameters are frozen and only the last MLP is tuned (Line 262).
We will take your advice and show the deltas with respect to the model trained from scratch for better presentation.
14. **Line 265 (Q12)**: SEED-Union-Lite combines SEED-Lite and SEED-IV-Lite (Lines 223-224), and more data brings better performance.
[1] Homan R W, 1988. The 10-20 electrode system and cerebral location
[2] Harati A et al., 2014. The TUH EEG CORPUS: A big data resource for automated EEG interpretation
[3] Kemp B et al., 2018. Sleep-edf database expanded
[4] Song T et al., 2020. EEG Emotion Recognition Using Dynamical Graph Convolutional Neural Networks
[5] Li R et al., 2022. A Multi-view Spectral-Spatial-Temporal Masked Autoencoder for Decoding Emotions with Self-supervised Learning.
---
Rebuttal Comment 1.1:
Title: Response to Authors; increased score
Comment: I thank the authors for their thorough response. I will increase my score to a 7 for the following reasons:
- (W1) I had misunderstood the contents of SEED and SEED-IV. I now understand that they contain different subjects.
- (W2) Fair enough. It is a modest but significant improvement.
- (Q1, Q3) Given these facts that commonly hold for EEG data, I do not think sparsity or coordinate consistency pose a serious weakness.
- (W4) I had misunderstood how nodes interact with each other. This seems like a reasonable scheme.
I still recommend that the paper be given a close reading for typos.
---
Reply to Comment 1.1.1:
Comment: We are pleased with the positive feedback and will address the typos in our revised version. Thank you for raising the rating score. | Rebuttal 1:
Rebuttal: Global response:
We are glad that the reviewers found our method novel (Reviewers rN6e, JXz6, JKwr), well motivated and reasonable (Reviewers tt92, JXz6, JKwr, v1Fw), and important (all reviewers). We are also delighted that the reviewers found our manuscript pleasant (JXz6), enjoyable (JKwr), and easy to understand (v1Fw).
A major concern shared by most reviewers is the absence of a wider variety of sources. Even though we have handled the cross-montage problem, broader cross-task transfer remains non-trivial because of challenges such as the type of electrodes (Reviewer JKwr), inconsistent time scopes, and task-specific feature pre-processing. Some of these challenges are beyond the scope of this paper. Nevertheless, we still manage to pre-train our model on a large-scale EEG corpus, the TUH EEG Corpus (TUEG) [1], maintaining the same pre-training/fine-tuning setting as in the main paper, except that models are pre-trained on 21 channels of EEG data. The results, shown in the following table, further validate the effectiveness of our method on cross-montage problems.
| Pre-training dataset (pre-training epoch) | Fine-tuning Accuracy on SEED (%) |
|-----|-----|
| random initialization | 94.59 |
| TUEG (1 epoch) | 94.85 |
| TUEG (2 epochs) | 95.11 |
| SEED | 95.15 |
| TUEG (3 epochs) | 95.29 |
Another concern is the clarity of the paper. We thank the reviewers for their feedback. We are addressing these concerns and polishing our paper in the revised version to make our work more understandable to the wider community. Specifically:
1. We have fixed all the typos we found throughout the manuscript, including missing spaces before brackets, unexpanded abbreviations, misspellings, etc.
2. Figure 1 now uses a larger, more legible text font. Refer to Figure 1 in the attached PDF file.
3. Figure 3 now has more diverse examples. Refer to Figure 2 in the attached PDF file.
4. We add more explicit details of our experiment setting.
5. We add discussions on the remaining challenges for cross-task EEG training to the limitation section.
Finally, we greatly appreciate that several reviewers recognize the potential of our work to inspire further exploration in the EEG community. We want to emphasize that our work contributes to the community not only by proposing a particular model that works well on the EEG emotion recognition task but also by providing a self-supervised learning paradigm that can tackle cross-montage problems as a first step toward large-scale EEG pre-training. We believe it is worth publishing to stimulate further discussion.
[1] Harati A, Lopez S, Obeid I, et al. The TUH EEG CORPUS: A big data resource for automated EEG interpretation[C]//2014 IEEE signal processing in medicine and biology symposium (SPMB). IEEE, 2014: 1-5.
Pdf: /pdf/f280c9d0726114b71ac474668a20f0de3cce7f1a.pdf | NeurIPS_2023_submissions_huggingface | 2,023 | Summary: This paper explores the application of large-scale pre-training techniques to scalp electroencephalogram (EEG) data. They leverage the abundance of unlabeled EEG data and address challenges related to sampling channel selection, structural information, and spatial information. To enable cross-dataset EEG pre-training, they propose a unified topology that maps different channel selections. They introduce MMM, a pre-training framework with multi-dimensional position encoding, multi-level channel hierarchy, and a multi-stage pre-training strategy based on the unified topology. Experimental results demonstrate significant improvements over previous state-of-the-art methods on benchmark datasets for emotional recognition.
Strengths: - The authors present a novel method for cross-dataset pretraining on EEG signals, which proves effective at improving emotion recognition. This framework seems promising in providing researchers in the EEG space with larger datasets for pretraining.
Weaknesses: - Although the authors present a pretraining method for cross-dataset EEG signals, they only combine the SEED and SEED-IV datasets. It would be extremely enlightening if the authors combined N > 2 datasets to see if their pretraining framework generalizes across them.
- I am a bit confused about how the emotion recognition accuracy can be so high while the loss does not seem to converge well and the topological maps (Figure 3) do not seem to be reconstructed well. Would the authors be able to clarify this phenomenon?
Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors: - I am curious whether this pretraining method can work on different downstream tasks, such as EEG-to-text translation. Have the authors considered this?
- The improvement/decrease in performance from adding multi-stage pre-training is seen for the SEED and SEED-Lite datasets. How computationally expensive is the multi-stage pre-training? I am trying to see if this tradeoff is worth the slight, possible improvement it may give.
- For Figure 3, the topological maps created by MAE seem quite different from the ground truth. Additionally, the topological maps created by MMM seem slightly better than MAE's but still quite different from the ground truth. Figure 4 also seems to show MMM achieving a lower reconstruction loss than MAE; however, both seem to converge at a mask ratio of 0.5. I also want to note that in Table 2 of the supplementary material, the random masking ratios seem to make little difference in performance. Have the authors considered trying different loss functions to observe distinct behaviors in loss?
- Please correct me if I am wrong, but it seems like the paper does not clarify whether the results in the paper are from the averaged EEG signals across all patients or an average of the results for each individual patient. Would the authors be able to kindly clarify a bit more?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 3 good
Limitations: The authors adequately addressed the limitations of the paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate your comments and suggestions that truly enhanced the quality of our paper.
1. **Cross-task challenge (W1, Q1)**: We have handled the cross-montage problem. But it is still non-trivial to do broader cross-task transitions because of challenges such as inconsistent time scope, task-specific feature pre-processing, etc. Some of these challenges are beyond the scope of this paper.
Despite this, we still managed to pre-train our model on a large-scale EEG corpus, the TUH EEG Corpus (TUEG) [1], which was designed for several medical tasks, including seizure detection, slow wave detection, etc., and fine-tune it on the SEED dataset. Note that we maintain a similar experimental setting for pre-training and fine-tuning, except that pre-training uses the 21 channels of the 10-20 system. The fine-tuning result on SEED (62 channels) is shown in the following table. Even though there are large gaps in sampling devices, subjects' physiological status, and so on, we outperform the model pre-trained on the downstream dataset by leveraging massive pre-training data (~1.7 TB).
This not only suggests that our pre-training framework can generalize across more than two datasets but also shows its effectiveness in leveraging large-scale resources with different channel configurations to further boost performance.
| Pre-training dataset (pre-training epoch) | Fine-tuning Accuracy on SEED (%) |
|-----|-----|
| random initialization | 94.59 |
| TUEG (1 epoch) | 94.85 |
| TUEG (2 epochs) | 95.11 |
| SEED | 95.15 |
| TUEG (3 epochs) | 95.29 |
2. **Regarding Figure 3 and Figure 4, Reconstruction convergence (W2, Q3)**: To clarify, reconstruction error (Figure 3, 4) is used to measure the reconstruction quality during the *pre-training* stage, and the emotional recognition accuracy is used for downstream task performance after *fine-tuning*. Despite expected correlations between them, the pre-training score does not directly account for the downstream task performance.
As for the convergence problem, Figure 4 shows the reconstruction loss under different experiment settings during the pre-training stage. Combined with Figure 3, it shows that MMM has a more robust EEG understanding ability than MAE, verified by their abilities to reconstruct corrupted EEG segments at different mask ratios; all reported results are from converged checkpoints. We believe the reviewer used 'not converge well' to describe the high loss value. We further use the R2 score to demonstrate the reconstruction quality in Figure 3 of the attached PDF file in the global response. Moreover, because of the possible gap between pre-training and downstream tasks, we assume that over-chasing a low reconstruction loss would not bring better downstream performance.
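To illustrate why an R2 score is a more interpretable measure of reconstruction quality than a raw loss value, here is a minimal sketch (not the paper's code; the function name and inputs are illustrative assumptions):

```python
# Hedged sketch, not the paper's implementation: R^2 normalizes the
# reconstruction error by the variance of the original signal, so it is
# interpretable even when the raw loss value looks "high".
def r2_score(original, reconstructed):
    """Coefficient of determination between a signal and its reconstruction."""
    mean = sum(original) / len(original)
    ss_res = sum((o - r) ** 2 for o, r in zip(original, reconstructed))
    ss_tot = sum((o - mean) ** 2 for o in original)
    return 1.0 - ss_res / ss_tot
```

A perfect reconstruction gives 1.0, and simply predicting the signal mean gives 0.0, regardless of the signal's amplitude or the loss function's scale.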
3. **More downstream tasks (Q1)**: Good point! Our method should work on different downstream tasks with different channel configurations. However, it is not yet trivial to do broader cross-task transitions because of the challenges such as inconsistent time scope, feature pre-processing, etc. Take the EEG-to-text translation task as an example. Previous work [4] uses eight frequency bands to extract EEG features on ZuCo [5, 6] dataset with inconsistent lengths between EEG segments and textual sentences, while our experiments on SEED use five frequency bands with fixed lengths of time per sample.
Nevertheless, such EEG tasks are still in the scope of EEG understanding, which will be a good future work with the help of our unified hierarchical representation.
4. **Details about multi-stage pre-training (Q2)**: Multi-stage pre-training is a strategy that alternately uses different masks in different iterations, and the global random mask and the region-wise mask have the same computational cost. Since the total number of training iterations is kept the same during pre-training, there is no additional computational trade-off.
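The alternating schedule described above can be sketched as follows; this is a hypothetical illustration under our own naming assumptions, not the authors' code:

```python
import random

# Hedged sketch of an alternating mask schedule: different mask types are
# used in different iterations, so the total iteration count (and compute)
# stays fixed relative to single-mask pre-training.
def global_random_mask(n_tokens, ratio, rng):
    """Mask a random subset of all tokens."""
    return set(rng.sample(range(n_tokens), int(n_tokens * ratio)))

def region_wise_mask(regions, ratio, rng):
    """Mask whole regions (groups of channel tokens) at once."""
    k = max(1, int(len(regions) * ratio))
    return {tok for region in rng.sample(regions, k) for tok in region}

def multi_stage_masks(n_iters, n_tokens, regions, ratio, seed=0):
    """Alternate between the two mask types across training iterations."""
    rng = random.Random(seed)
    return [
        global_random_mask(n_tokens, ratio, rng) if it % 2 == 0
        else region_wise_mask(regions, ratio, rng)
        for it in range(n_iters)
    ]
```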
5. **Regarding Table 2 in the supplementary material (Q3)**: The results in Table 2 of the supplementary material help confirm our assumption that different pre-training settings (mask ratios) make little difference on the downstream task (see Response 2: Reconstruction convergence), i.e., with the help of region-wise tokens, our framework is robust to high mask ratios.
We will also consider your suggestion of exploring more loss functions in future work.
6. **Way to get results (Q4)**: We follow the experimental setting of previous work [2, 3], where the reported results are averaged over all patients. We will clarify the dataset details in the revised version.
[1] Harati A, Lopez S, Obeid I, et al. The TUH EEG CORPUS: A big data resource for automated EEG interpretation[C]//2014 IEEE signal processing in medicine and biology symposium (SPMB). IEEE, 2014: 1-5.
[2] T. Song, W. Zheng, P. Song and Z. Cui, "EEG Emotion Recognition Using Dynamical Graph Convolutional Neural Networks," in IEEE Transactions on Affective Computing, vol. 11, no. 3, pp. 532-541, 1 July-Sept. 2020, doi: 10.1109/TAFFC.2018.2817622.
[3] Rui Li, Yiting Wang, Wei-Long Zheng, and Bao-Liang Lu. 2022. A Multi-view Spectral-Spatial-Temporal Masked Autoencoder for Decoding Emotions with Self-supervised Learning. In Proceedings of the 30th ACM International Conference on Multimedia (MM '22). Association for Computing Machinery, New York, NY, USA, 6–14.
[4] Wang Z, Ji H. Open vocabulary electroencephalography-to-text decoding and zero-shot sentiment classification[C]//Proceedings of the AAAI Conference on Artificial Intelligence. 2022, 36(5): 5350-5358.
[5] Hollenstein N, Rotsztejn J, Troendle M, et al. ZuCo, a simultaneous EEG and eye-tracking resource for natural sentence reading[J]. Scientific data, 2018, 5(1): 1-13.
[6] Hollenstein N, Troendle M, Zhang C, et al. ZuCo 2.0: A Dataset of Physiological Recordings During Natural Reading and Annotation[C]//Proceedings of the 12th Language Resources and Evaluation Conference. 2020: 138-146.
---
Rebuttal Comment 1.1:
Comment: Thank you for attending to my questions. I have thoroughly read through the rebuttal and have decided to raise my score to a 6. The authors have clarified my questions in an elegant manner. Thanks for the great work!
---
Reply to Comment 1.1.1:
Comment: We sincerely appreciate the reviewer's constructive feedback and will refine the text as suggested. Thank you sincerely for the improved rating. | null | null | null | null | null | null |